#gemma3

PKPs Powerfromspace1

Running Gemma 3 locally with no API costs? Yes. Free to experiment? Absolutely.

We built a full-on comment processing system using Docker Model Runner + #google Gemma 3: no cloud, no third parties, just local power.

Try it out 👉 https://www.docker.com/blog/run-gemma-3-locally-with-docker-model-runner/

#Docker #GenAI #Gemma3 #OpenSource #LLM
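For anyone who wants to try the same setup, here is a minimal sketch of sending one "comment" to a local Gemma 3 through Docker Model Runner's OpenAI-compatible API. The port (12434), the engine path, and the ai/gemma3 model tag are assumptions based on Docker's documented defaults; check the linked blog post and `docker model ls` for your actual values.

```python
# Minimal sketch: classify a comment with a local Gemma 3 served by Docker
# Model Runner. Assumes host TCP access is enabled and the model was pulled
# with `docker model pull ai/gemma3` (port/path per Docker's docs; verify).
import requests

resp = requests.post(
    "http://localhost:12434/engines/llama.cpp/v1/chat/completions",
    json={
        "model": "ai/gemma3",
        "messages": [
            {"role": "system", "content": "Classify the comment as positive, negative, or spam."},
            {"role": "user", "content": "Great write-up, worked on the first try!"},
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Since the endpoint speaks the OpenAI wire format, the official openai client pointed at the same base URL should work as well.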
michabbb

#Mistral Small 3.1: SOTA multimodal #AI with a 128k context window 🚀

#MistralAI releases an improved #opensource model that outperforms #Gemma3 and #GPT4oMini at 150 tokens/sec, with #multimodal capabilities under the #Apache2 license.

🧵👇 #machinelearning
Benjamin Carr, Ph.D. 👨🏻‍💻🧬

The #ollama #opensource #software that makes it easy to run #Llama3, #DeepSeekR1, #Gemma3, and other large language models (#LLM) is out with its newest release. ollama wraps the llama.cpp back end for running a variety of LLMs and integrates conveniently with other desktop software.
The new ollama 0.6.2 release features support for #AMD #StrixHalo, a.k.a. the #RyzenAI Max+ laptop / SFF desktop SoC.
https://www.phoronix.com/news/ollama-0.6.2
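Once Ollama is running it serves a small REST API on localhost, so trying Gemma 3 from code takes a few lines. A minimal sketch, assuming `ollama pull gemma3` has already been run and the default port 11434:

```python
# Minimal sketch: one-shot generation against a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3",   # default tag; e.g. "gemma3:12b" or "gemma3:27b" for larger variants
        "prompt": "In two sentences, what does the llama.cpp back end do?",
        "stream": False,     # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```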
Karlheinz Agsteiner

Not sure if you have noticed it: Google has released Gemma 3, a powerful model that is small enough to run on normal computers.

https://blog.google/technology/developers/gemma-3/

I've done some experiments on my laptop (with a GeForce 3080 Ti), and I am very impressed. I tried to be happy with Llama 3, with the DeepSeek R1 distills on Llama, and with Mistral, but the models that would run on my computer were not in the same league as what you get remotely from ChatGPT, Claude, or DeepSeek.

Gemma changes this for me. So far I have had it write three smaller pieces of JavaScript and analyze a few texts, and it performed slowly but flawlessly. So finally I can move to "use the local LLM for the 90% default case, and go for the big ones only if the local LLM fails".

This way:
- I use far less CO2 for my LLM tasks
- I am in control of my data; nobody can collect my prompts and later sell my profile to ad customers
- I am sure the IP of my prompts stays with me
- I have the privacy to ask it whatever I want, and no server in the US or CN has that data.

Interested? If you have a powerful graphics card in your PC, it is totally simple:

1. Install LM Studio from LMStudio.ai
2. In LM Studio, click Discover and download the Gemma 3 27B Q4 model
3. Chat

If your graphics card is too small, you might head for the smaller 12B model, but I can't tell you how well it performs.

#LMStudio #gemma3 #gemma #chatgpt #llm #google
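Beyond the chat tab, LM Studio can also serve the loaded model over an OpenAI-compatible local server, which makes the "local LLM for the 90% case" workflow scriptable. A minimal sketch, assuming LM Studio's default port 1234; the model identifier below is hypothetical, so list the models endpoint to get the real one:

```python
# Minimal sketch: query the model loaded in LM Studio via its local
# OpenAI-compatible server (enable the server in LM Studio first).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key can be any string

completion = client.chat.completions.create(
    model="gemma-3-27b-it",  # hypothetical identifier; confirm with client.models.list()
    messages=[{"role": "user", "content": "Write a small JavaScript debounce helper."}],
)
print(completion.choices[0].message.content)
```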
Lucas Janin 🇨🇦🇫🇷

Testing Open WebUI with Gemma 3 on my Proxmox mini PC in an LXC. My hardware is limited (a 12th Gen Intel Core i5-12450H), so I'm only using the 1B (28 tokens/s) and 4B (11 tokens/s) versions for now.

Image description is functioning, but it is slow: it takes 30 seconds to generate this text with the 4B version and 16 GB allocated to the LXC.

Next step: trying this on my Mac M1.

#openwebui #gemma3 #selfhosted #selfhost #selfhosting #alttext #ollama #proxmox #lxc #ia
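The alt-text use case can also be driven directly against the Ollama API that Open WebUI sits on top of. A minimal sketch, assuming the gemma3:4b tag (the 4B and larger Gemma 3 variants accept images; the 1B is text-only), Ollama's default port, and a placeholder image path:

```python
# Minimal sketch: generate alt text for a local image with a multimodal
# Gemma 3 via Ollama. Images are passed base64-encoded; "photo.jpg" is a
# placeholder path.
import base64
import requests

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:4b",
        "prompt": "Write one sentence of alt text for this image.",
        "images": [image_b64],  # list of base64-encoded images
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["response"])
```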
Dr. Fortyseven 🥃 █▓▒░

Another sort of downside to #Gemma3: it kisses your ass SO MUCH. "You were right to question me on that! Good call, sugar tits!"

#llm
Mike Stone

So, I did it. I hooked up the #HomeAssistant Voice to my #Ollama instance. As @ianjs suggested, it's much better at recognizing the intent of my requests. As @chris_hayes suggested, I'm using the new #Gemma3 model. It now knows "How's the weather" and "What's the weather" are the same thing, and I get an answer for both. Responses are a little slower than without the LLM, but honestly it's pretty negligible. It's slightly slower again if I use local #Piper instead of HA's cloud service.
Mike Stone

Testing out the newly released #Gemma3 model locally on #ollama. This is one of the more frustrating aspects of these LLMs. It must be said that LLMs are fine for what they are, and what they are is a glorified autocomplete. They have their uses (just like autocomplete does), but if you try to use them outside of their strengths, your results are going to be less than reliable.
🔘 G◍M◍◍T 🔘

💡 Gemma 3: an open-source AI model optimized for a single GPU

https://gomoot.com/gemma-3-modello-ai-open-source-ottimizzato-per-una-singola-gpu/

#blog #chatbot #deepseek #gemini #gemma3 #ia #iot #llm #local #news #openai #picks #tech #tecnologia
Global Threads

🤖 AI
🔴 Google Unveils Gemma 3 with 128K Context Window

🔸 The new model comes in 4 sizes (1B, 4B, 12B, 27B) with enhanced multimodal reasoning.
🔸 Outperforms Llama-405B, DeepSeek-V3 & OpenAI's o3-mini.
🔸 The integrated safety tool ShieldGemma 2 provides content safety checks.

#Google #Gemma3 #AI #Tech
Cloudbooklet

Google drops Gemma 3: a powerhouse AI with multimodal support, function calling, and a 128K context window! Smarter, faster, and more efficient. Ready to build with it? 🔥 #AI #Gemma3 #google #Tech