Use getattr with default True for MPTConfig.tie_word_embeddings,
as some MPT model configs lack this attribute.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
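A minimal sketch of the defensive read (the `MPTConfig` class below is an illustrative stand-in, not the real one): `getattr` falls back to `True` when the attribute is absent, while an explicitly stored value still wins.

```python
class MPTConfig:
    """Stand-in for the real MPTConfig; some checkpoints' configs
    lack the tie_word_embeddings attribute entirely."""
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

# Older config without the attribute: getattr falls back to True.
old_config = MPTConfig(d_model=768)
tie_old = getattr(old_config, "tie_word_embeddings", True)

# Config that sets it explicitly: the stored value wins.
new_config = MPTConfig(d_model=768, tie_word_embeddings=False)
tie_new = getattr(new_config, "tie_word_embeddings", True)
```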
The previous commit added a warning log for skipping unknown weights
(e.g. embed_tokens.biases) but did not import the logger, so emitting
the warning raises a NameError.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
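A sketch of the fixed loading path, assuming the usual module-level logger pattern (`load_weight` and its signature are hypothetical; vLLM's own helper is `init_logger`, but a standard `logging` logger illustrates the missing import):

```python
import logging

# Module-level logger that the previous commit's warning call assumed.
logger = logging.getLogger(__name__)

def load_weight(name: str, known_params: set) -> bool:
    """Skip weights the model does not define (e.g. embed_tokens.biases)."""
    if name not in known_params:
        logger.warning("Skipping unknown weight %s", name)
        return False
    return True
```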
In vLLM v0.6.2, ParallelLMHead.forward() raises RuntimeError since
its weights should be used through LogitsProcessor.linear_method.apply().
Pass lm_head as the first argument to LogitsProcessor, which handles
the hidden_states -> logits projection internally.
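The call pattern can be sketched as follows; both classes below are illustrative stand-ins for vLLM's ParallelLMHead and LogitsProcessor, not the real implementations:

```python
class FakeLMHead:
    """Stand-in for ParallelLMHead: forward() raises, and the weight
    is instead consumed by the logits processor."""
    def __init__(self, weight):
        self.weight = weight  # [vocab_size, hidden_size]
    def forward(self, hidden_states):
        raise RuntimeError("use LogitsProcessor to apply lm_head weights")

class FakeLogitsProcessor:
    """Stand-in for LogitsProcessor: takes the lm_head as its first
    argument and projects hidden_states to logits itself."""
    def __call__(self, lm_head, hidden_states):
        # logits[i][v] = hidden_states[i] . lm_head.weight[v]
        return [[sum(h * w for h, w in zip(row, weight_row))
                 for weight_row in lm_head.weight]
                for row in hidden_states]

lm_head = FakeLMHead([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
hidden = [[2.0, 3.0]]                       # [num_tokens, hidden_size]
logits = FakeLogitsProcessor()(lm_head, hidden)  # [[2.0, 3.0, 5.0]]
```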