
Getting Started with Llama 3.2
It's finally here! Kind of…
I am kind of excited about Llama 3.2, the new model released by Meta.
Why only kind of, you might ask? Because it's not really available in Europe, in the sense that we can't use it for anything besides personal projects. However, that's fine with me! I only want to use it to make fun content about how to use AI for different random projects anyway! :)
In this article, I wanna chat about Meta's new model: Llama 3.2.
The Good, The Bad, and the Kind of Whatever
So, another month, another model. It's like these models come out every second now, spit out by giant corporations so we can have fun downloading and running them locally, in the hopes of living an off-grid life somewhere, using them to investigate the big questions of the universe in peaceful solitude.

But it is here indeed: Llama 3.2, the powerful, multimodal beast that will solve all of our problems, like:
How can I waste an entire afternoon trying to find a useful thing to do with this model?
Nevertheless, here is a summary of the good stuff (which I definitely did not generate with ChatGPT o1-preview, I swear, I don't know what you're talking about!):
Meta has released Llama 3.2, introducing:
- New small (1B & 3B parameters) and medium-sized vision language models (11B and 90B).
- Lightweight text-only models with 1B and 3B parameters suitable for edge and mobile devices.
- Features of the 1B and 3B Models:
- Support a context length of 128K tokens.
- Optimized for on-device applications like summarization and instruction following.
- Features of the 11B and 90B Vision Models:
- Serve as drop-in replacements for their text-only counterparts.
- Outperform some closed models like Claude 3 Haiku in image understanding tasks. (that’s something)
- Customizable and fine-tunable using tools like torchtune. (sure like I…
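If you just want to poke at the new weights yourself, here is a minimal sketch of loading the small text-only model with Hugging Face transformers. This is my own guess at a sane starting point, not an official recipe: it assumes you have requested access to the gated meta-llama/Llama-3.2-1B-Instruct repo on Hugging Face and logged in with huggingface-cli, and the prompt and generation settings are just placeholders.

```python
# Hypothetical quick smoke test of Llama 3.2 1B Instruct via transformers.
# Assumes: access granted to the gated meta-llama repo, `huggingface-cli login` done,
# and `accelerate` installed so device_map="auto" can place the model for you.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # the small, text-only variant
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input; recent transformers versions accept a list of role/content dicts.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Llama 3.2 in one sentence."},
]

output = generator(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last message is the model's reply.
print(output[0]["generated_text"][-1]["content"])
```

Swap in the 3B checkpoint the same way if your machine has a bit more headroom; the 11B and 90B vision variants need a different, image-capable pipeline and a lot more memory.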