Found 1,117 demos
Comparison: GPT-5.4 vs Claude Opus 4.6 vs Gemini 2.5 ⬇️ https://t.co/8STW5fUFgB
I have a MacBook M4 Max with 128GB, so I figured 16GB would be enough for the Mac Mini, but Qwen 3.5 and Gemma 4 are so absurdly capable that I now deeply regret not buying the 24GB model. I'm making do with Qwen 3.5 9B, and it's remarkably smart for its size. If you told me it was Gemini 2.5 Flash, I'd believe it.
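The regret in the post above comes down to simple memory arithmetic. A rough sketch (the bytes-per-parameter figures and the 20% runtime overhead are approximations, not measured values; actual usage depends on the quantization format and context length):

```python
# Rough RAM estimate for running a local LLM at various quantizations.
# bytes_per_param: fp16 = 2.0, 8-bit = 1.0, 4-bit = 0.5 (approximate).
def model_ram_gb(params_billions: float, bytes_per_param: float,
                 overhead: float = 1.2) -> float:
    """Weights plus ~20% headroom for KV cache and runtime buffers."""
    return params_billions * bytes_per_param * overhead

for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"Qwen 3.5 9B @ {label}: ~{model_ram_gb(9, bpp):.1f} GB")
```

At fp16 a 9B model already overflows a 16GB machine once the OS takes its share, which is why an 8-bit or 4-bit quant is the practical choice there, and why the 24GB configuration buys meaningful headroom.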
🚨 Do you understand what Google just did? GOOGLE JUST DROPPED GEMMA 4. Their most intelligent open model yet. Built on the same research as Gemini 3, but this time you can run it LOCALLY on your own hardware. Advanced reasoning. Agentic workflows. Full commercial use under Apache 2.0. Meaning anyone can build with it. No restrictions. No licensing fees. The open source AI race just got a lot more interesting. The gap between what you can run locally today versus 12 months ago is almost unbelievable. We are at a point where the models sitting on your own machine are more capable than what most companies were paying millions for in 2023. This is what happens when the biggest labs compete in open source. Everybody wins.
Today we're releasing Gemma 4, our new family of open foundation models, built on the same research and technology as our Gemini 3 series. These models set a new standard for open intelligence, offering SOTA reasoning capabilities from edge-scale (2B and 4B w/ vision/audio) up to a 26B parameter MoE model and a 31B dense model. By releasing Gemma 4 under the Apache 2.0 license, we hope to enable more innovation across the research and developer communities. Our earlier Gemma 3 models were downloaded 400M times and over 100,000 variants of those models have been published, so we're excited to see what the community will do with the even better Gemma 4 models! Learn more at https://t.co/BW6O3Gr8bc and https://t.co/8M0XSQSP4u Great work by everyone involved! #Gemma4 #AI #OpenSource #ML
New AI models drop almost every day of the week. But let’s compare Gemini 3.1 with other same-tier models—different DNA. → Gemini 3.1 Flash-Lite 389 t/s, $0.25/M input. Built for high-volume, at-scale workloads. 86.9% GPQA Diamond. 1M context window. → Claude Haiku 4.5 101 t/s, $1/M input. 73.3% SWE-bench. Near-frontier coding at a budget price. Best for Anthropic-ecosystem sub-agents. → GPT-5.3 Instant ~$0.80/M input. 26.8% fewer hallucinations with web search vs. the prior model. Faster, more direct answers. See the full breakdown in the infographics ↓
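The price and throughput figures in the comparison above are easier to weigh when normalized to a fixed workload. A quick sketch using only the numbers quoted in the post (the 10M-token workload is an illustrative example; throughput for GPT-5.3 Instant isn't given, so it's omitted):

```python
# Normalize the quoted figures: cost for a fixed input volume, and
# wall-clock time to emit a fixed number of output tokens.
def cost_usd(tokens: int, usd_per_million: float) -> float:
    return tokens / 1_000_000 * usd_per_million

models = {
    # name: (tokens_per_sec or None, USD per million input tokens)
    "Gemini 3.1 Flash-Lite": (389, 0.25),
    "Claude Haiku 4.5":      (101, 1.00),
    "GPT-5.3 Instant":       (None, 0.80),
}

input_tokens = 10_000_000   # example workload: 10M input tokens
output_tokens = 100_000     # example: 100K generated tokens

for name, (tps, price) in models.items():
    line = f"{name}: ${cost_usd(input_tokens, price):.2f} for 10M input tokens"
    if tps:
        line += f"; ~{output_tokens / tps:.0f}s to generate 100K tokens"
    print(line)
```

On these quoted numbers, Flash-Lite is 4x cheaper per input token than Haiku 4.5 and almost 4x faster, which is the "high-volume, at-scale" positioning in concrete terms.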
Here we go: Gemma 4 released: "Outperforms models 20x its size." Google dropped Gemma 4 under Apache 2.0, fully open source, a big licensing shift. Built on Gemini 3 tech, four sizes: E2B, E4B, 26B MoE, 31B dense. Price-performance: the 31B is the #3 open model on Arena AI, the 26B MoE is #6, beating models 20x their size. The MoE activates only 3.8B params at inference. Fits on consumer GPUs quantized. Edge: E2B/E4B run offline on phones, Raspberry Pi, Jetson Nano. Native vision + audio at 2B params, 128K context. Built with Qualcomm/MediaTek.
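The "fits on consumer GPUs quantized" claim follows from the parameter counts in the post. A back-of-envelope sketch (4-bit at ~0.5 bytes/param is an approximation; real quant formats and KV-cache overhead vary):

```python
# A MoE model must hold ALL expert weights in memory, but each token
# only routes through the active subset -- so VRAM is set by total
# params while per-token compute tracks active params.
def weights_gb(params_b: float, bytes_per_param: float) -> float:
    return params_b * bytes_per_param

total_b, active_b = 26.0, 3.8          # Gemma 4 26B MoE, per the post
print(f"4-bit weights: ~{weights_gb(total_b, 0.5):.0f} GB")   # ~13 GB
print(f"fp16 weights:  ~{weights_gb(total_b, 2.0):.0f} GB")   # ~52 GB
print(f"per-token compute roughly like a {active_b}B dense model")
```

~13 GB of 4-bit weights lands inside a 16GB consumer card, while the 3.8B active-parameter path is what keeps token generation fast.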
Introducing SenseNova-SI-1.4-InternVL3-8B: scaling spatial intelligence across open multimodal models. 🚀 ⚡ 8M curated samples: SenseNova-SI-8M — rigorous taxonomy of spatial capabilities 🧠 SOTA spatial reasoning: hits 88.8 on MindCube-Tiny, 40.1 on MMSI — beats Gemini-2.5-Pro & GPT-5 🎯 Grounding: 89.21 avg on RefCOCO splits, 78.64 on CountBench 📐 Depth estimation: 95.56 relative depth, 80.31 absolute depth on Ibims 🛠️ Multi-base: InternVL3, Qwen3-VL, Qwen2.5-VL, BAGEL (unified understanding + generation) ✅ Apache 2.0. Drop-in for existing research pipelines. 🤖 Model: https://t.co/bxG2j4a3Qq 📄 Paper: https://t.co/HCtbzPfS5r
🚨 BREAKING: Someone built a coding editor that lets you use Claude Opus 4.6, GPT-5.4 and Gemini 3.1 Pro without paying a single dollar in API fees. It's called Glass and it just made Cursor, Windsurf, and every AI IDE look like a rip-off. Here's how it works (in plain English):↓
Gemma 4 is here! Our most intelligent open models to date are built on the same world-class research and tech as Gemini 3, and are sized to run and fine-tune efficiently on local hardware. Check out what @GoogleGemma 4 brings to devs: 💎 Advanced reasoning: deep logic tasks, complex multi-step planning, and beyond 💎 Longer context: seamlessly analyze entire codebases with context windows of 128K tokens for our edge models and 256K tokens for our largest models 💎 Vision and audio: rich, multimodal interactions out of the box 💎 140+ languages: trained on 140+ languages 💎 Apache 2.0 license: industry-standard open-source license
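To gauge whether "analyze entire codebases" is realistic at the quoted window sizes, here's a rough check using the common ~4 characters/token heuristic for source code (a heuristic only, not Gemma's actual tokenizer):

```python
# Does a codebase of a given byte size fit in a context window?
# Uses the rough ~4 chars/token heuristic; real tokenizers differ.
def fits(codebase_bytes: int, context_tokens: int,
         chars_per_token: float = 4.0) -> bool:
    return codebase_bytes / chars_per_token <= context_tokens

MB = 1_000_000
for size_mb in (0.25, 0.5, 1.0, 2.0):
    print(f"{size_mb} MB of source: "
          f"128K window={fits(int(size_mb * MB), 128_000)}, "
          f"256K window={fits(int(size_mb * MB), 256_000)}")
```

By this estimate, a 256K window holds roughly 1 MB of source text in one shot; beyond that you're back to retrieval or chunking, whatever the model.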
"Open source can't compete with frontier AI" Goog...
"Open source can't compete with frontier AI" Google just killed that narrative. Gemma 4 dropped this morning. Gemini 3's architecture. Fully open. Free forever: - 4 sizes ($35 Pi to workstations) - 256K context - Native multimodal - Built-in agent tools - 140+ languages - Beats models 20x larger Apache 2.0. Build anything. Sell it. No permission. No API costs. Data stays local. The future isn't behind corporate APIs. It's on your hardware. Below: Gemma 4 + your own Claude Code setup 👇
let me explain what Google just did: → they’ve just released their most capable open models yet → Gemma 4… built from the same research behind Gemini 3… four sizes… all running on your own hardware → the 31B dense model and 26B mixture of experts model deliver what Google is calling “frontier-level intelligence” on a personal computer... no cloud required… your data stays on your machine → the 26B MoE only activates 3.8B parameters at a time… meaning it runs fast without needing massive compute → the 2B and 4B models are built for phones and edge devices… text, image, and audio support they can see and hear in real time… 140+ languages natively → 256K context window on the larger models… enough to analyze full codebases or handle long multi-turn agent workflows → native tool use built in… these models can plan steps and call tools on your behalf without extra wiring → Arena Elo scores: Gemma 4 31B hit 1464 and the 26B hit 1453… competing with models 20-30x their size as of today… GLM 5 at 754B scored 1469… Kimi k2.5 at 1100B scored 1464… Gemma is doing this at a fraction of the parameters → Apache 2.0 license… fully open weights, commercially permissive… and the first time Google has done this with Gemma → 400 million downloads and over 100,000 community variants since the first Gemma launched → available now on Google AI Studio, HuggingFace, Kaggle, and Ollama the open source AI race just took a massive leap forward imo running frontier-level reasoning on your laptop without sending a single byte to the cloud completely changes the game for privacy, speed, and cost and the fact that a 26B model with 3.8B active parameters is competing with models 20-30x its size tells you where this is heading running models locally? you gotta get this set up today
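The Elo figures in the post make the "20-30x its size" claim easy to check. A small sketch using only the scores and parameter counts quoted above:

```python
# Compare Arena Elo against total parameter count, per the post's figures.
def size_ratio(params_b: float, baseline_b: float = 31.0) -> float:
    """How many times larger a model is than Gemma 4 31B."""
    return params_b / baseline_b

entries = [
    # (model, total params in billions, Arena Elo)
    ("Gemma 4 31B dense", 31, 1464),
    ("Gemma 4 26B MoE",   26, 1453),
    ("GLM 5",            754, 1469),
    ("Kimi k2.5",       1100, 1464),
]

for name, params, elo in entries:
    print(f"{name}: Elo {elo} at {params}B "
          f"({size_ratio(params):.1f}x Gemma 4 31B's size)")
```

On these numbers, GLM 5 is ~24x larger for a 5-point Elo edge, and Kimi k2.5 is ~35x larger at the same 1464, which is the parameter-efficiency story the post is telling.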
Gemma 4 is here! 4⃣ Our most capable, agentic open model, built on the same research as Gemini 3. ✨ Reasoning. Multimodal. Four sizes (2B to 31B). Base + Instruct. Released under Apache 2.0. Runs on your phone, laptop, or servers. 🧵↓ https://t.co/AMyJQdsljJ