Getting Started with Hugging Face Transformers: A Practical Guide

In the last blog post we used Ollama to run a model locally and then wrote a Python client to connect and talk to it. In this post we will explore what Transformers are, dive into the Hugging Face ecosystem, and build practical examples for text generation, translation, sentiment analysis, and image classification.
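As a taste of what the post covers, here is a minimal sketch of one of those tasks using the Hugging Face `pipeline` API. The model name is an assumption (a small, commonly used sentiment checkpoint); any compatible model from the Hub would work.

```python
from transformers import pipeline

# A hypothetical choice of checkpoint; swap in any sentiment model from the Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("Building your own local LLM is surprisingly fun!")[0]
# result is a dict with a predicted "label" and a confidence "score"
print(result["label"], round(result["score"], 3))
```

The same `pipeline` call, with a different task string and model, drives the translation and image-classification examples in the full article.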

See the full article on Medium.

*Want to dive deeper? Check out our complete code examples in the GitHub repository and experiment with different models and tasks. The future of AI is in your hands!*

Building Your Own Local LLM: A Hands-On Journey

Why Build Your Own LLM Setup?

If you’re reading this, you’ve probably used ChatGPT, Claude, or another AI assistant. They’re incredibly powerful, but have you ever paused to think about what happens to your data when you hit “send”? Every query, every piece of code you share, every business idea you brainstorm—it all gets processed on someone else’s servers.

This isn’t just about privacy paranoia. It’s about understanding and controlling the technology that’s rapidly becoming essential to how we work and think.

Learning by Building

I’m a firm believer that the best way to understand something is to build it yourself. Reading documentation is great, watching tutorials helps, but nothing beats getting your hands dirty with actual code. If you’re like me—someone who needs to do to truly understand—then this series is for you.

Over the next few posts, we’ll embark on a journey from zero to a fully functional, private AI assistant running entirely on your local machine. No cloud dependencies, no data leaving your computer, just you and your own personal LLM.
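To make the "no data leaving your computer" idea concrete, here is a minimal sketch of talking to a locally running Ollama server over its HTTP API. The model name `llama3` and the default port 11434 are assumptions; use whatever model you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumed default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3"):
    # stream=False asks for a single JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="llama3"):
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running local Ollama server with the model pulled
    print(ask("Why run an LLM locally?"))
```

Every byte of the prompt and the response stays on localhost; that is the core of the setup this series builds out.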
