DevOps and Docker Talk

Local GenAI LLMs with Ollama and Docker

Synopsis

Bret and Nirmal are joined by friend of the show, Matt Williams, to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama and Docker's "GenAI Stack," and to build apps on top of open source LLMs.

We've designed this conversation for tech people like me, who are no strangers to using LLMs in web products like ChatGPT, but are curious about running open source generative AI models locally and how they might set up their Docker environment to develop things on top of these open source LLMs.

Matt Williams walks us through all the parts of this solution and, with detailed explanations, shows us how Ollama makes it easier on Mac, Windows, and Linux to set up LLM stacks.

Be sure to check out the live recording of the complete show from April 18, 2024 on YouTube (Ep. 262).

★Topics★
(00:00) - Intro
(01:32) - Understanding LLMs and Ollama
(03:16)

Creators & Guests
Cristi Cotovan - Editor
Beth Fisher - Producer
Bret Fisher - Host
Matt Williams - Host
Nirmal Mehta - Host
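
As a taste of what the episode covers, here is a minimal sketch of talking to a locally running model through Ollama's REST API from Python. It assumes Ollama is already installed and serving on its default port (11434) and that you have pulled a model; the model name and prompt below are placeholders, not anything prescribed in the episode.

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default once it's running.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",   # assumes you've already run: ollama pull llama3
    "prompt": "Explain what Ollama does in one sentence.",
    "stream": False,     # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# The generated text comes back in the "response" field.
print(body["response"])
```

The same endpoint backs the chat and Copilot-style clones discussed in the show: any app that can make an HTTP request to localhost can build on the open source LLM underneath.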