In the context of large language models, what does extending the context window via positional interpolation typically involve?
A) Increasing the model’s capacity by adding more layers to the neural network.
B) Rescaling the positional encodings by interpolation so that longer input sequences fit within the model's trained position range.
C) Decreasing the number of tokens processed to improve computational efficiency.
D) Replacing the positional encodings with random noise to test model robustness.
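
For reference, positional interpolation (as in RoPE-based context extension) down-scales position indices so that a longer sequence maps into the position range seen during pretraining, rather than extrapolating beyond it. A minimal sketch with illustrative lengths (the 2048/8192 values and function names are assumptions, not taken from the question):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotary embedding angles for the given (possibly fractional) positions."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))  # (dim/2,)
    return np.outer(positions, inv_freq)                      # (seq, dim/2)

# Illustrative lengths (assumed for this sketch):
train_len, target_len = 2048, 8192
scale = train_len / target_len  # 0.25

positions = np.arange(target_len)
# Positional interpolation: rescale indices so position 8191 maps to ~2047,
# keeping every rotation angle inside the range seen during pretraining.
interp_angles = rope_angles(positions * scale, dim=128)
```

The key design choice is that interpolated (fractional) positions stay in-distribution for the attention mechanism, which is why a short fine-tuning run usually suffices after applying the rescaling.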