large-language-model Questions
1
I'm trying a simple demo where I give the LLM a document and ask it to answer a few things from the document. So far I've had very little success.
My prompt is various versions of the followi...
Galcha asked 22/5 at 10:14
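A minimal sketch of the kind of document-QA prompt described above; the file name, question, and wording are placeholders, since the original prompt is cut off:

# Hypothetical prompt layout: document first, then an explicit instruction to answer only from it.
document = open("report.txt").read()
question = "What was the total revenue in 2022?"

prompt = (
    "Answer the question using only the document below. "
    "If the answer is not in the document, say so.\n\n"
    f"Document:\n{document}\n\n"
    f"Question: {question}\nAnswer:"
)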
6
Solved
Context: I am trying to query Llama-2 7B, taken from HuggingFace (meta-llama/Llama-2-7b-hf). I give it a question and context (I would guess anywhere from 200-1000 tokens), and ask it to answer the...
Hillis asked 26/7, 2023 at 14:46
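For reference, a generic way to pose context plus question to that checkpoint with transformers; the prompt layout and generation settings are a sketch, not the asker's code, and the base -hf model (unlike the chat variant) is not instruction-tuned, so a completion may simply continue the text:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

context = "..."   # the 200-1000 token passage mentioned above
question = "..."
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))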
3
Solved
I tried executing a LangChain agent. I want to save the verbose output into a variable, but all I can access from agent.run is the final answer.
How can I save the verbose output to a...
Garratt asked 5/6, 2023 at 11:31
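One way to capture what verbose prints is to ask the executor for its intermediate steps rather than only the final string; a sketch assuming an agent built with initialize_agent, with tools and llm already defined:

from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,  # keep the thought/action trace
)

result = agent({"input": "your question"})
final_answer = result["output"]
trace = result["intermediate_steps"]  # list of (AgentAction, observation) pairs you can save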
5
I am trying to fine-tune an LLM.
My code so far:
from datasets import load_dataset, DatasetDict, Dataset
from transformers import (
AutoTokenizer,
AutoConfig,
AutoModelForSequenceClassification,...
Somerset asked 16/12, 2023 at 14:30
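For reference, a minimal end-to-end sketch of the sequence-classification fine-tune that the imports above suggest; the model name and dataset are placeholders:

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

dataset = load_dataset("imdb")  # placeholder dataset with "text" and "label" columns
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"], eval_dataset=tokenized["test"])
trainer.train()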
3
I am in the process of learning and developing a chatbot using Gemini Pro. My previous experience includes extensive use of the GPT API, where I became familiar with a concept of "system promp...
Ginseng asked 17/2 at 1:36
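At the time of the question, gemini-pro did not expose a separate system role through the google-generativeai SDK, so a common workaround is to send the instruction as the opening turn of the chat history; a sketch with illustrative wording:

import google.generativeai as genai

genai.configure(api_key="...")
model = genai.GenerativeModel("gemini-pro")

# No system role here, so the "system prompt" becomes the first user/model exchange.
chat = model.start_chat(history=[
    {"role": "user", "parts": ["You are a terse assistant. Answer in one sentence."]},
    {"role": "model", "parts": ["Understood."]},
])
print(chat.send_message("What is a system prompt?").text)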
2
I accessed a Llama-based model on Hugging Face named "LeoLM/leo-hessianai-7b-chat".
I downloaded the model on my Mac with the device set to 'MPS'. The download worked; however, when I want...
Pachton asked 25/10, 2023 at 11:45
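The actual error is cut off above, but for reference, a generic sketch of loading a Llama-style checkpoint on Apple's MPS backend with transformers; the dtype and device handling are assumptions:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LeoLM/leo-hessianai-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 halves memory; not every op/dtype is supported on MPS, so this is only a starting point.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("mps")

inputs = tokenizer("Hallo, wie geht es dir?", return_tensors="pt").to("mps")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))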
7
I'm creating a conversation like so:
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name=OPENAI_DEFAULT_MODEL)
conversation = ConversationChain(llm=llm, memory=ConversationBuf...
Gorden asked 8/4, 2023 at 13:51
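The snippet is cut off at ConversationBuf...; a sketch assuming ConversationBufferMemory was the intended memory class, with OPENAI_API_KEY and OPENAI_DEFAULT_MODEL as in the asker's code:

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name=OPENAI_DEFAULT_MODEL)
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

print(conversation.run("Hi, my name is Sam."))
print(conversation.run("What is my name?"))  # the buffer memory carries the earlier turn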
1
I'd like to fine-tune StarCoder (https://huggingface.co/bigcode/starcoder) on my dataset and on a GCP VM instance.
It says in the documentation that for training the model they used 512 Tesla A10...
Tiffanytiffi asked 1/6, 2023 at 17:22
6
Solved
I am trying to use a custom embedding model in LangChain with ChromaDB. I can't seem to find a way to use the base embedding class without having to use some other provider (like OpenAIEmbeddings o...
Protestation asked 2/10, 2023 at 16:51
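A sketch of wrapping a local model behind LangChain's base Embeddings interface and handing it to Chroma; the sentence-transformers model here only stands in for whatever custom model is meant:

from typing import List
from langchain.embeddings.base import Embeddings
from langchain.vectorstores import Chroma
from sentence_transformers import SentenceTransformer

class MyEmbeddings(Embeddings):
    """Any model can sit behind embed_documents/embed_query."""
    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return self.model.encode(texts).tolist()

    def embed_query(self, text: str) -> List[float]:
        return self.model.encode([text])[0].tolist()

db = Chroma.from_texts(["hello world", "goodbye world"], embedding=MyEmbeddings())
print(db.similarity_search("hello", k=1))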
1
I've previously built a PDF-searching tool in LangChain which uses the
chain.run(input_documents=, question=) syntax to ask the model questions along with context from that PDF.
I want to integrate...
Cliff asked 19/7, 2023 at 10:20
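The chain.run(input_documents=, question=) call matches load_qa_chain; for reference, a minimal sketch of that setup, where docs is assumed to be the list of Documents produced from the PDF:

from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain

llm = ChatOpenAI(temperature=0)
chain = load_qa_chain(llm, chain_type="stuff")

answer = chain.run(input_documents=docs, question="What is the refund policy?")
print(answer)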
4
What is the difference between instruction tuning and normal fine-tuning for large language models?
Also, the instruction tuning I'm referring to isn't the in-context/prompting kind.
All the recent pape...
Piker asked 11/6, 2023 at 15:37
5
Solved
I've searched all over the LangChain documentation on their official website, but I didn't find how to create a LangChain document from a str variable in Python, so I searched their GitHub code and I found...
Homogenous asked 25/6, 2023 at 15:9
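For reference, a Document can be built directly from a string; the metadata below is illustrative:

from langchain.docstore.document import Document

text = "LangChain lets you build applications around language models."
doc = Document(page_content=text, metadata={"source": "in-memory string"})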
8
This is my code:
import os
from dotenv import load_dotenv,find_dotenv
load_dotenv(find_dotenv())
print(os.environ.get("OPEN_AI_KEY"))
from langchain.llms import OpenAI
llm=OpenAI(model_...
Exhilaration asked 31/7, 2023 at 2:38
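The actual problem is cut off above, but one frequent stumbling block with this setup is the variable name: LangChain's OpenAI wrapper looks for OPENAI_API_KEY by default, so a custom name such as OPEN_AI_KEY has to be passed explicitly. A sketch:

import os
from dotenv import load_dotenv, find_dotenv
from langchain.llms import OpenAI

load_dotenv(find_dotenv())

# Pass the key explicitly because the env var is not named OPENAI_API_KEY
llm = OpenAI(openai_api_key=os.environ["OPEN_AI_KEY"], temperature=0)
print(llm("Say hello in one word."))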
4
Solved
I'm currently working on developing a chatbot powered by a Large Language Model (LLM), and I want it to provide responses based on my own documents. I understand that using a fine-tuned model...
Brochure asked 28/8, 2023 at 7:22
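A minimal retrieval-augmented sketch of the documents-as-context approach; the loader, splitter, and store choices are assumptions:

from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# docs is a list of Documents produced by a loader and text splitter
db = Chroma.from_documents(docs, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=db.as_retriever(),
    chain_type="stuff",
)
print(qa.run("What does the onboarding document say about passwords?"))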
1
I have 10 code repositories in JavaScript (Vue.js), where each repository corresponds to one theme.
I want to train an LLM on these 10 code repositories to generate new themes using prompts.
The LLM m...
Ovolo asked 14/6, 2023 at 7:54
1
I tried to create a sarcastic AI chatbot that can mock the user with Ollama and LangChain, and I want to be able to change the LLM running in Ollama without changing my LangChain logic.
The problem...
Iraqi asked 26/11, 2023 at 4:57
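One sketch of keeping the chain logic fixed while only the Ollama model name comes from configuration; the prompt and model names are illustrative:

from langchain.llms import Ollama
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

def build_chain(model_name: str) -> LLMChain:
    # Only the model name changes; the prompt/chain logic stays the same.
    llm = Ollama(model=model_name)
    prompt = PromptTemplate.from_template(
        "You are a sarcastic assistant. Mock the user's request, then answer it.\n\nUser: {question}"
    )
    return LLMChain(llm=llm, prompt=prompt)

chain = build_chain("llama2")  # or "mistral", etc.: whatever model Ollama has pulled
print(chain.run(question="Can you open my calendar?"))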
2
Basically I want to achieve this with Flask and LangChain: https://www.youtube.com/watch?v=x8uwwLNxqis.
I'm building a Q&A Flask app that uses LangChain in the backend, but I'm having trouble t...
Carven asked 24/3, 2023 at 20:13
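A minimal sketch of exposing a LangChain chain behind a Flask route; the endpoint name and chain type are assumptions:

from flask import Flask, request, jsonify
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain

app = Flask(__name__)
chain = ConversationChain(llm=ChatOpenAI(temperature=0))  # built once, reused per request

@app.route("/ask", methods=["POST"])
def ask():
    question = request.json["question"]
    answer = chain.run(question)
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(debug=True)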
3
I am trying to create a customer support system using LangChain. I am using text documents as an external knowledge provider via TextLoader.
In order to remember the chat history, I am using ConversationalRetriev...
Studdingsail asked 16/5, 2023 at 14:26
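A sketch of the TextLoader plus ConversationalRetrievalChain setup described above, with a buffer memory for the chat history; the file name and parameters are placeholders:

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

docs = TextLoader("support_kb.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
db = Chroma.from_documents(chunks, OpenAIEmbeddings())

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=db.as_retriever(),
    memory=memory,
)
print(qa({"question": "How do I reset my password?"})["answer"])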
4
Is there any way of getting sentence embeddings from meta-llama/Llama-2-13b-chat-hf on Hugging Face?
Model link: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
I tried using transformers.Auto...
Beekman asked 18/8, 2023 at 1:59
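One common approach is mean-pooling the last hidden states of the base AutoModel; a sketch (a chat LLM is not trained as a sentence encoder, so embedding quality varies):

import torch
from transformers import AutoTokenizer, AutoModel

model_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

sentences = ["I like whales.", "Whales are large marine mammals."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq_len, hidden_dim)
mask = inputs["attention_mask"].unsqueeze(-1)          # ignore padding when pooling
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # one vector per sentence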
2
I am trying to run Llama 2 on my computer with a server, and it warns me that my speed is going to be lower because I am making some mistake I am unaware of. However, it works, and I don't know how to...
Hippocrene asked 16/10, 2023 at 10:38
1
SBERT's (https://www.sbert.net/) sentence-transformers library (https://pypi.org/project/sentence-transformers/) is the most popular library for producing vector embeddings of text chunks in the Pyt...
Lethargic asked 29/9, 2023 at 22:53
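For reference, the basic usage of the library described above; the model name is just its common default example:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(["The cat sat on the mat.", "A dog slept on the rug."])
print(embeddings.shape)  # (2, 384) for this model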
0
I am using LangChain with OpenAI GPT-3.5. I am using agents to send users' queries to specific tools, and I am getting output responses through my agent. Now I want the output response to be JSON b...
Saville asked 6/9, 2023 at 20:49
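One sketch of getting structured output from the agent: append format instructions to the query and parse the reply; the schema fields are illustrative, and the model may still need retries if it drifts from the format:

from langchain.output_parsers import StructuredOutputParser, ResponseSchema

schemas = [
    ResponseSchema(name="answer", description="the answer to the user's question"),
    ResponseSchema(name="source_tool", description="which tool produced the answer"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

query = "What is our refund window?\n" + parser.get_format_instructions()
raw = agent.run(query)      # agent is the existing tool-routing agent
result = parser.parse(raw)  # dict like {"answer": ..., "source_tool": ...}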
0
I have around 30 GB of JSON data across multiple files and want to build a query bot on top of it.
I have built the same thing with a text file, but I am not sure how it will work for JSON data.
I have explored JSONLoader b...
Poleaxe asked 3/9, 2023 at 5:46
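A sketch of pointing JSONLoader at one of the files; the jq_schema path is an assumption about the JSON layout, and the jq package must be installed:

from langchain.document_loaders import JSONLoader

loader = JSONLoader(
    file_path="data/part-0001.json",          # one of the many JSON files
    jq_schema=".records[].description",       # which fields become document text
    text_content=True,
)
docs = loader.load()
print(len(docs), docs[0].page_content[:80])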
2
I would like to pass a similarity threshold to the retriever. So far I have only figured out how to pass a k value, but this was not what I wanted. How can I pass a threshold instead?
from langchain...
Kneeland asked 20/7, 2023 at 12:18
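A sketch using the score-threshold search type on the vector store's retriever; db is the existing store and the threshold value is illustrative:

retriever = db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.75, "k": 10},  # at most 10 docs above the threshold
)
docs = retriever.get_relevant_documents("How do I cancel my subscription?")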
2
Solved
I have three questions:
Given the count of LLM parameters in billions, how can you figure out how much GPU RAM you need to run the model?
If you have enough CPU RAM (i.e. no GPU), can you run the...
Avon asked 15/5, 2023 at 14:57
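For the first question, the usual back-of-the-envelope estimate is parameter count times bytes per parameter for the chosen dtype, plus some headroom for activations and the KV cache; a rough sketch:

def estimate_gpu_ram_gb(params_billion: float, bytes_per_param: float = 2.0, overhead: float = 1.2) -> float:
    """Inference-time estimate: weights times dtype size, plus ~20% headroom."""
    return params_billion * 1e9 * bytes_per_param * overhead / 1e9

print(estimate_gpu_ram_gb(7))        # ~16.8 GB in fp16
print(estimate_gpu_ram_gb(7, 4.0))   # ~33.6 GB in fp32
print(estimate_gpu_ram_gb(7, 0.5))   # ~4.2 GB with 4-bit quantized weights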