INITIALIZATION
Knowledgebase: ki-dev-large
Base Query: "Who is the customer? Who is the end customer? Where is the installation location of the system (location, end customer's factory, building)? What is the hall height of the factory or building in which the system is to be installed? Sales Manager: Who is the responsible FLT Sales Manager for this customer? No explanation, please just the table. Table title: Customer, End customer, Installation location, Hall height (without mentioning the other definitions), Sales Manager"
Model: gpt-4
Use Curl?: 
==================================================
**Elapsed Time: 0.00 seconds**
==================================================
ROUTING
Query type: list
==================================================
**Elapsed Time: 1.75 seconds**
==================================================
RAG PARAMETERS
Max Context To Include: 100
Lowest Score to Consider: 0.1
==================================================
**Elapsed Time: 0.00 seconds**
==================================================
VECTOR SEARCH ALGORITHM TO USE
Use MMR search?: True
Use Similarity search?: False
==================================================
**Elapsed Time: 0.10 seconds**
==================================================
VECTOR SEARCH DONE
==================================================
**Elapsed Time: 6.21 seconds**
==================================================
PRIMER
Primer: You are Simon, a highly intelligent personal assistant in a system called KIOS. You are a chatbot that can read knowledgebases through the "CONTEXT" that is included in the user's chat message. Your role is to act as an expert at summarization and analysis. In your responses to enterprise users, prioritize clarity, trustworthiness, and appropriate formality. Be honest by admitting when a topic falls outside your scope of knowledge, and suggest alternative avenues for obtaining information when necessary. Make effective use of chat history to avoid redundancy and enhance response relevance, continuously adapting to integrate all necessary details in your interactions. Use as many tokens as possible to provide a detailed response.
==================================================
**Elapsed Time: 0.19 seconds**
==================================================
AN ERROR OCCURRED in send_message()
Error Message: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, your messages resulted in 59400 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
==================================================
**Elapsed Time: 2.53 seconds**
==================================================
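The run fails because the retrieved context (59,400 tokens) far exceeds gpt-4's 8,192-token window; with "Max Context To Include: 100" the prompt assembly step never checks the budget before calling `send_message()`. Below is a minimal sketch of one way to guard against this, assuming the retrieved chunks are available as plain strings ordered by score. The `trim_context` helper, the 2,000-token reserve, and the chunk list are illustrative assumptions, not part of the KIOS codebase.

```python
# Sketch: keep only as many retrieved chunks as fit the model's context window,
# leaving headroom for the primer, the user query, and the completion.
# Assumptions (not from the log): chunks are plain strings sorted by relevance,
# and the reserve of 2,000 tokens is an arbitrary illustrative choice.
import tiktoken


def trim_context(chunks, model="gpt-4", max_context_tokens=8192,
                 reserved_tokens=2000):
    """Return the highest-ranked chunks whose combined size stays in budget."""
    enc = tiktoken.encoding_for_model(model)
    budget = max_context_tokens - reserved_tokens  # room for primer + query + answer
    kept, used = [], 0
    for chunk in chunks:
        n = len(enc.encode(chunk))
        if used + n > budget:
            break  # adding this chunk would overflow the prompt
        kept.append(chunk)
        used += n
    return kept
```

Alternatives under the same assumptions: lower "Max Context To Include" from 100 to a value that fits the window, or route this query type to a model with a larger context window before the vector-search results are stitched into the prompt.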