Putting AI in your lap(top): A hands-on guide to running LLMs on your MacBook

This presentation will guide you through running a freely available large language model (LLM) locally on your own hardware. You’ll gain practical experience making your own data accessible to the model and explore various techniques to enhance its performance. The session is designed to provide a deeper understanding of LLM capabilities and limitations through hands-on demonstrations.

Attendees will:

  • Gain a clear understanding of what large language models are and how they work
  • Learn how to set up and run the Llama 2 model on your local MacBook (a minimal setup is sketched below)
  • Discover how to leverage the LangChain library to interact with and extend the capabilities of LLMs
  • Understand the importance of system prompts and how to craft them to guide the model’s behavior effectively
  • Explore techniques for maintaining context and coherence in conversations with the model
  • Learn how to enhance the model’s responses by integrating relevant information from external sources
  • Discover how to represent and organize your data using embeddings to improve the model’s understanding and performance (a retrieval example is also sketched below)
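
To give a flavour of the kind of setup the session walks through, here is a minimal sketch of running a local Llama 2 chat model through LangChain with a system prompt and conversation memory. It assumes the langchain and llama-cpp-python packages are installed and that a quantized Llama 2 model file has already been downloaded; the file name, prompt wording, and parameter values are illustrative, not the exact code used in the talk.

    from langchain.llms import LlamaCpp
    from langchain.chains import ConversationChain
    from langchain.memory import ConversationBufferMemory
    from langchain.prompts import PromptTemplate

    # Load a quantized Llama 2 chat model from disk; n_ctx sets the context window.
    llm = LlamaCpp(
        model_path="llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path to a local model file
        n_ctx=2048,
        temperature=0.7,
    )

    # The system prompt at the top of the template guides the model's behaviour;
    # {history} and {input} are filled in by the chain on each turn.
    template = """You are a concise, helpful assistant running entirely on a laptop.

    Current conversation:
    {history}
    Human: {input}
    Assistant:"""
    prompt = PromptTemplate(input_variables=["history", "input"], template=template)

    # Buffer memory replays earlier turns so multi-turn conversations stay coherent.
    chat = ConversationChain(
        llm=llm,
        prompt=prompt,
        memory=ConversationBufferMemory(ai_prefix="Assistant"),
    )

    print(chat.predict(input="What can you help me with while I'm offline?"))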
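
Making your own data available to the model works along similar lines: documents are embedded, stored in a vector index, and the most relevant passages are retrieved and injected into the prompt at question time. This sketch assumes the sentence-transformers and faiss-cpu packages are installed; the embedding model name and the sample notes are placeholders, and llm refers to the LlamaCpp instance from the sketch above.

    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.chains import RetrievalQA

    # Embed a few of your own notes so semantically similar passages can be found later.
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    notes = [
        "The quarterly report is due on the first Friday of each quarter.",
        "Backups of the shared drive run nightly at 02:00.",
    ]
    index = FAISS.from_texts(notes, embeddings)

    # Retrieve the most relevant passage and let the local model answer with it
    # added to its prompt.
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=index.as_retriever(search_kwargs={"k": 1}),
    )
    print(qa.run("When is the quarterly report due?"))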

Presenter