How to run LLMs locally

Hosted by Dr. Alvaro Cintas

Timestamps:
[00:00] Introduction and housekeeping before starting the session.
[01:53] Recap of major GPT model releases and the rise of closed-source LLMs.
[03:58] The emergence of open-source models following the leak of Meta’s LLaMA weights in 2023.
[05:23] Comparison between closed-source and open-source models.
[07:00] Key differences: transparency, customization, cost, and collaboration.
[09:24] Downsides of open-source: security, quality control, and lack of central oversight.
[10:18] Brief overview of leading open-source models like LLaMA 2, Mistral, and Falcon.
[11:00] Introduction to leaderboards comparing model benchmarks and licensing.
[12:53] Five compelling reasons to run LLMs locally (privacy, offline use, cost, control).
[15:39] Overview of options to run...
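
The workshop's own walkthrough of local-inference options isn't shown here, so as a rough illustration of one popular route, below is a minimal Python sketch using the llama-cpp-python bindings to load a quantized GGUF model and generate text. The model path, context size, and prompt are placeholder assumptions for the sketch, not details taken from the session.

# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python; download any GGUF model first,
# e.g. from Hugging Face). The path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,  # context window size (assumed value)
)

output = llm(
    "Explain why someone might run an LLM locally.",
    max_tokens=128,  # cap the length of the generated reply
)
print(output["choices"][0]["text"])

Running fully offline like this keeps prompts and outputs on your own machine, which is exactly the privacy and cost argument the [12:53] segment outlines.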
