<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# BERTology

There is a growing field of study concerned with investigating the inner workings of large-scale transformers such as BERT
(which some call "BERTology"). Some good examples of this field are:

- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick:
  https://huggingface.co/papers/1905.05950
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://huggingface.co/papers/1905.10650
- What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D.
  Manning: https://huggingface.co/papers/1906.04341
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://huggingface.co/papers/2210.04633

To support the development of this new field, we have included some additional features in the BERT/GPT/GPT-2 models to
help access their internal representations, mainly adapted from the great work of Paul Michel
(https://huggingface.co/papers/1905.10650):

- accessing all the hidden states of BERT/GPT/GPT-2,
- accessing all the attention weights for each head of BERT/GPT/GPT-2,
- retrieving the output values and gradients of the heads in order to compute the head importance metric and prune heads as explained
  in https://huggingface.co/papers/1905.10650 (both uses are sketched below).
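
For example, here is a minimal sketch of the first two features, assuming a standard BERT checkpoint (`bert-base-uncased` is only an example; any BERT/GPT/GPT-2 checkpoint works the same way):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained(
    "bert-base-uncased",
    output_hidden_states=True,  # return the hidden states of every layer
    output_attentions=True,     # return the attention weights of every head
)

inputs = tokenizer("BERTology is fun!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (num_layers + 1) tensors, each of shape
# (batch_size, sequence_length, hidden_size) -- the embedding output
# plus one hidden state per layer.
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)

# attentions: tuple of num_layers tensors, each of shape
# (batch_size, num_heads, sequence_length, sequence_length).
print(len(outputs.attentions), outputs.attentions[0].shape)
```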
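
Head pruning itself is exposed through the `prune_heads` method available on every `PreTrainedModel`. A minimal sketch follows; the layer and head indices are arbitrary placeholders, not values derived from the importance metric:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Prune heads 2 and 4 in layer 0 and head 7 in layer 11. In practice you
# would pick these indices from the head importance scores computed as in
# https://huggingface.co/papers/1905.10650.
model.prune_heads({0: [2, 4], 11: [7]})

# The configuration keeps track of what was pruned.
print(model.config.pruned_heads)  # {0: [2, 4], 11: [7]}
```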

To help you understand and use these features, we have added a specific example script, [bertology.py](https://github.com/huggingface/transformers-research-projects/tree/main/bertology/run_bertology.py), which extracts information from and prunes a model pre-trained on
GLUE.