<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# PEFT
The [`~integrations.PeftAdapterMixin`] provides functions from the [PEFT](https://huggingface.co/docs/peft/index) library for managing adapters with Transformers. This mixin currently supports LoRA, IA3, and AdaLora. Prefix tuning methods (prompt tuning, prompt learning) aren't supported because they can't be injected into a torch module.
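
A minimal sketch of the adapter workflow (the Hub checkpoint names below are illustrative; any base model with a compatible LoRA adapter works the same way):

```python
from transformers import AutoModelForCausalLM

# Load a base model, then attach a LoRA adapter from the Hub.
# The adapter repo name is illustrative; substitute your own.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model.load_adapter("ybelkada/opt-350m-lora", adapter_name="lora_1")

# Adapters can be switched, disabled, and re-enabled
# without reloading the base model.
model.set_adapter("lora_1")
model.disable_adapters()
model.enable_adapters()
```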
[[autodoc]] integrations.PeftAdapterMixin
    - load_adapter
    - add_adapter
    - set_adapter
    - disable_adapters
    - enable_adapters
    - active_adapters
    - get_adapter_state_dict
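
A fresh adapter can also be attached directly with a PEFT config. A minimal sketch, assuming an OPT-style attention layout for `target_modules`:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Attach a freshly initialized LoRA adapter to the base model.
lora_config = LoraConfig(target_modules=["q_proj", "k_proj"])
model.add_adapter(lora_config, adapter_name="lora_1")

# List the active adapters and export this adapter's weights.
print(model.active_adapters())
adapter_state_dict = model.get_adapter_state_dict("lora_1")
```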