# Question

```python
if False:  # set to True to load the fine-tuned LoRA adapters for inference
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="lora_model",  # YOUR MODEL YOU USED FOR TRAINING
        max_seq_length=max_seq_length,
        dtype=dtype,
        load_in_4bit=load_in_4bit,
    )
    FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# alpaca_prompt = You MUST copy from above!
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Eu preciso treinar uma IA gpt2 do hugginface em meu computador",  # instruction: "I need to train a Hugging Face GPT-2 AI on my computer"
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")

from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=4096)
```

# Answer

The model streams back the full prompt followed by its generated answer, reproduced verbatim below:

```
<s>Below is an instruction that describes a task related to Python programming or computer science concepts. Provide a detailed, accurate, and helpful response that addresses the query or solves the problem presented. Follow these guidelines:

1. If the task involves coding, provide a complete, working Python solution with explanations.
2. If it's a conceptual question, explain thoroughly, using examples where appropriate.
3. Break down complex problems or concepts into steps or components.
4. If you're unsure about any part of your response, clearly state your level of confidence and any assumptions you're making.
5. If the query is ambiguous or lacks necessary information, ask for clarification before proceeding.
6. After providing your response, briefly summarize the key points or solution.

Remember, accuracy and helpfulness are crucial. Avoid making unfounded claims or providing incorrect information.

### Input:
Eu preciso treinar uma IA gpt2 do hugginface em meu computador

### Output:
# Importações necessárias
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import TrainingArguments, Trainer
from transformers.data import DataCollatorForLanguageModeling
from transformers.data.processors import DataProcessor

# Configurações
training_args = TrainingArguments(
    output_dir='./resultados',
    evaluation_strategy='epoch',
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    eval_steps=1000,
    num_warmupSteps=500,
    weighted_metric='accuracy',
)

# Carregamento do tokenizador
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# Carregamento do modelo
modelo = GPT2LMHeadModel.from_pretrained('gpt2')

# Configurações do tracker
tracker = DataProcessor(
    input_class=DataProcessor,
    batch_size=1,
    max_length=100,
    padding=True,
    truncation=True,
)

# Treinando o modelo
trainer = Trainer(
    args=training_args,
    train_loop=tracker,
    trainable_pipelines=[],
)

# Treinando o modelo
trainer.train()

# Evaluação do modelo
evaluador = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    model=modelo,
    padding=True,
    max_length=100,
    batch_size=8,
)

# Evalua o modelo
resultados = trainer.evaluate()

# Imprimir os resultados
for name, loss, acc in resultados:
    print(f'{name}: {loss}, {acc:.2f}%')</s>
```
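Note that the generated answer above is raw model output: it mixes real `transformers` APIs with arguments that do not exist in the library (for example, `train_loop`, `trainable_pipelines`, `num_warmupSteps`, and `weighted_metric` are not valid `Trainer`/`TrainingArguments` parameters). For comparison, here is a minimal working sketch of the task the prompt asks about, fine-tuning GPT-2 with the Hugging Face `Trainer`; the dataset and hyperparameters are illustrative assumptions, not part of this model card:

```python
# Minimal GPT-2 fine-tuning sketch with the Hugging Face Trainer.
# Dataset and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Illustrative corpus; substitute your own text dataset.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

# mlm=False produces causal-LM labels (inputs shifted by one position).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./resultados",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=1,
    warmup_steps=500,  # the real name for what the model called "num_warmupSteps"
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```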
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
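For readers who want to reproduce a similar fine-tune, below is a minimal sketch of the usual Unsloth + TRL SFT recipe. It is not the actual training configuration of this checkpoint: the dataset, prompt template, LoRA rank, and hyperparameters are assumptions, and the `SFTTrainer` keyword arguments follow the older TRL signature used in the Unsloth notebooks of this era:

```python
# Illustrative Unsloth + TRL SFT recipe; NOT the actual training config.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",  # the base model named above
    max_seq_length=max_seq_length,
    dtype=None,         # auto-detect float16 / bfloat16
    load_in_4bit=True,  # QLoRA-style 4-bit base weights
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
)

# Assumed example corpus with instruction/input/output columns.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    # Fold the columns into one training string (assumed prompt template).
    return {"text": (f"{example['instruction']}\n\n### Input:\n{example['input']}"
                     f"\n\n### Output:\n{example['output']}{tokenizer.eos_token}")}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=60,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```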