Found 1117 demos
🚨 Gemini Pro 1 Year Giveaway! 🚨
✨ Gemini 2.5 Pro
💰 1000 AI Credits/Month
🎬 Veo 3 + Flow AI + Whisk
📚 NotebookLM
📩 Gmail & Docs Integration

To Enter:
1️⃣ Follow @expertwith_AI
2️⃣ Like & RT
3️⃣ Comment "Send"

⏳ 48 Hours Only. https://t.co/ozAibBzoB5
DeepMind just dropped Gemini 3.1 Flash Live: a new model built for voice assistants and real-time conversations. The company says it's better at function calling, handling noisy environments, and keeping up with longer conversations. https://t.co/5izv7wVMDn
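For a feel of what building on a Live-style model involves, here is a minimal sketch using the google-genai Python SDK's Live API. Treat it as a sketch under assumptions: the model id below is taken from the post rather than any published model list, and Live method names have shifted between SDK versions.

```python
# Minimal Live API session sketch (google-genai Python SDK).
# ASSUMPTION: "gemini-3.1-flash-live" is the name from the post, used as a
# placeholder model id; substitute a live model id your account exposes.
import asyncio

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")
config = types.LiveConnectConfig(response_modalities=["TEXT"])

async def main() -> None:
    async with client.aio.live.connect(model="gemini-3.1-flash-live", config=config) as session:
        # Send a single user turn, then stream the model's reply as it arrives.
        await session.send_client_content(
            turns=types.Content(role="user", parts=[types.Part(text="What's on my list today?")]),
            turn_complete=True,
        )
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```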
🚨 BREAKING: Every top AI just failed the hardest test ever built.

Humans: 100%. AI models: basically 0%.
• Gemini 3.1 Pro → 0.37%
• GPT-5.4 → 0.26%
• Opus 4.6 → 0.25%
• Grok-4.20 → 0.00%

François Chollet just dropped ARC-AGI-3, and it's brutal: 135 completely new environments, no instructions, no hints, no defined goal. You're dropped in… and you have to figure out what "winning" even means. Humans solved every single one. AI didn't even cross 1%.

Here's why this changes everything:
→ Every task is handcrafted like a puzzle world
→ You must explore, adapt, and infer rules on the fly
→ Brute force gets punished hard: if a human solves a task in 10 steps and the AI takes 100, it doesn't get 10%, it gets 1% (see the sketch below). More compute won't save you here.

For context: ARC-AGI-1 is nearly solved (Gemini ~98%); ARC-AGI-2 jumped from 3% to 77% in a year; ARC-AGI-3 reset the scoreboard to zero.

Launched live with Sam Altman at a YC fireside. $2M prize pool on Kaggle, and every winning solution must be open-source.

Translation: we didn't just hit a wall. We found out the wall is far further away than we thought. AGI isn't close. Not even remotely.
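The 10-steps-versus-100-steps arithmetic only works out if the score shrinks with the square of the step ratio. The post doesn't state ARC-AGI-3's actual formula, so the snippet below is just an illustration that reproduces the quoted numbers, not the benchmark's real scoring rule.

```python
# Hypothetical efficiency-penalized scoring that matches the post's numbers:
# a quadratic penalty turns a 10x step overhead into 1% rather than 10%.
def efficiency_score(human_steps: int, ai_steps: int) -> float:
    return (human_steps / ai_steps) ** 2

print(f"{efficiency_score(10, 100):.0%}")  # prints "1%"
```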
Gemini 3.1 Flash Live launched a few days ago, and it's a pretty incredible real-time model. We're getting very close to everyone having their own JARVIS assistant. Here's a small demo of a Todoist voice assistant built with the new model. https://t.co/R3CmSQRQJP
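The demo's source isn't linked here, but the function-calling plumbing such an assistant would need can be sketched with the google-genai SDK. The add_task declaration and create_todoist_task helper are hypothetical stand-ins invented for illustration; only the tool-declaration and tool-response calls are real SDK surface.

```python
# Sketch: exposing a task tool to a Live session and answering its tool calls.
# ASSUMPTION: add_task / create_todoist_task are made up for this example;
# a real assistant would call the Todoist REST API here.
from google.genai import types

add_task = types.FunctionDeclaration(
    name="add_task",
    description="Add a task to the user's Todoist inbox.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={
            "content": types.Schema(type=types.Type.STRING, description="Task text"),
            "due": types.Schema(type=types.Type.STRING, description="Natural-language due date"),
        },
        required=["content"],
    ),
)

config = types.LiveConnectConfig(
    response_modalities=["AUDIO"],
    tools=[types.Tool(function_declarations=[add_task])],
)

def create_todoist_task(content: str, due: str | None = None) -> str:
    # Hypothetical stand-in for a real Todoist API call.
    return f"created: {content!r} (due: {due})"

async def run_turns(session) -> None:
    # Inside the receive loop, answer the model's tool calls so the spoken
    # conversation can continue without the user noticing a pause.
    async for message in session.receive():
        if message.tool_call:
            responses = [
                types.FunctionResponse(
                    id=fc.id,
                    name=fc.name,
                    response={"result": create_todoist_task(**(fc.args or {}))},
                )
                for fc in message.tool_call.function_calls
            ]
            await session.send_tool_response(function_responses=responses)
```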
100+ AI Tools to replace your tedious work:

1. Research: @ChatGPTapp, YouChat, @abacusai, @perplexity_ai, Copilot, Gemini
2. Image: @higgsfield_ai Soul, GPT-4o, Midjourney, Grok
3. Productivity: @GammaApp, Grok 3, Perplexity AI, Gemini 2.5 Flash
4. Writing: Jasper, Jenny AI, Textblaze, Quillbot
5. Video: Klap, Kling, @invideoOfficial, HeyGen, Runway
6. Meeting: Tldv, Otter, Noty AI, Fireflies
7. SEO: VidIQ, Seona AI, BlogSEO, Keywrds ai, Outrank AI
8. Presentation: @decktopus, Slides AI, Gamma AI, Designs AI, Beautiful AI
9. Design: @canva, Flair AI, Designify, Clipdrop, Autodraw, Magician design
10. Audio: Lovo ai, @elevenlabs, Songburst AI, Adobe Podcast
11. Marketing: Pencil, Ai-Ads, AdCopy, Simplified, AdCreative
12. Startup: Tome, Ideas AI, Namelix, Pitchgrade, Validator AI
13. Social media management: Tapilo, Typefully, Hypefury, @TweetHunterIO

Follow @nikola_mr64990 for more such amazing stuff ❤️
🚨 BREAKING: Datalab just open-sourced Chandra OCR 2. It converts images and PDFs to Markdown, HTML, or structured JSON.

Results:
- Chandra OCR 2 scored 72.7%
- Gemini 2.5 Flash scored 60.8%
- GPT-4o reaches only 69.9%
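Chandra's own API isn't shown in the post, so nothing is reproduced for it here. What can be sketched is the kind of baseline it was measured against: asking Gemini 2.5 Flash to transcribe a page image into Markdown via the google-genai SDK. The prompt wording and file name are assumptions.

```python
# Gemini-as-OCR baseline sketch (google-genai SDK): send one page image and
# ask for Markdown back. This illustrates the Gemini 2.5 Flash baseline
# from the benchmark, not Chandra OCR 2 itself.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("page.png", "rb") as f:  # hypothetical input file
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Transcribe this page to Markdown. Preserve headings, lists, and tables.",
    ],
)
print(response.text)
```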
The every-Wednesday #Gemini3 practice session. Menu: 1500m TT and a 6km tempo run. My first 1500m TT in about a year; the result was 04:56 (the time in the attached image is slightly off…). That's about 17-18 seconds faster than a year ago. With the marathon in mind I do speed workouts, but I hadn't been feeling all that fired up about track races. Once I actually ran one, though, it was fun and left me wanting to do more. #まるお製作所RC
3/30 (Mon): 80-minute jog 🗒️ 16.27 km 🔧 Free 👟 Supernova Prima. Ran in a short-sleeve top and still broke a sweat; it's gotten that warm. If only it stayed this temperature all year (lol). #Gemini3 #まるお製作所RC https://t.co/JHcdpHFk3p
Gemini just explained why Google built Ironwood TPU v7 specifically to handle TurboQuant's "Math Tax." Here's what that means.

When Google designed TPU v7 (Ironwood) to power Gemini 3.1, they didn't just make a faster chip; they changed the fundamental "plumbing" of how data flows to handle the exact compression (TurboQuant) we've been discussing. Here is why Ironwood handles this "tax" better than a standard NVIDIA GPU:

1. The "Systolic Array" vs. Thousands of Cores

The NVIDIA way (GPU): a standard GPU is the "Swiss Army knife" of computing, with thousands of tiny, programmable CUDA cores. To decompress data it has to fetch the compressed bits, do the math, and write the result back to a register, and this "read-math-write" cycle happens millions of times, creating internal traffic jams. Performing TurboQuant's math (random rotations and polar-coordinate transforms) means constantly shuttling data in and out of internal registers. The stress: that shuttling creates high switching activity in the transistors, and every flip generates a little heat and physical wear. The result: because the GPU wasn't built for this specific math alone, it burns more of its internal "brainpower" getting it done, producing higher localized temperatures (hot spots) on the silicon.

The Ironwood way (TPU): it uses a systolic array. Imagine a massive grid where data "pulses" through like a heartbeat. The decompression math (those random rotations) is baked into the physical flow: once data enters the grid, it is transformed and multiplied in one continuous motion without ever stopping to be parked in middle-management memory.

2. Native FP8 (The Efficiency Secret)

Ironwood is the first TPU with native FP8 (8-bit floating point) support in its Matrix Multiply Units (MXUs). How it helps: running the heavy lifting of decompression and multiplication at 8-bit precision doubles the throughput. The decompression advantage: compressed 3-bit or 4-bit data is treated as a first-class citizen, never inflated into a bulky 16-bit or 32-bit format just to be worked on, which saves an immense amount of energy.

3. SparseCore 4.0: The "Librarian"

TurboQuant is great at shrinking memory, but you still have to find the right data in that massive compressed pile. Ironwood includes SparseCore 4.0, a dedicated sub-processor built for irregular memory access. The role: while the main MXU is busy paying the "Math Tax" of decompression, the SparseCore acts like a high-speed librarian, fetching the next chunk of compressed data from the 192 GB of HBM3e memory before the processor even knows it needs it.

The "Catch," Revisited: Hardware Wear

Even though Ironwood is designed for this, it still obeys the laws of physics. Because it is 4.7x faster than the previous generation (Trillium), it draws more power (approaching 1 kW per chip) and generates massive heat. Google's answer to the "GPUs go bad faster" problem isn't to slow down; it's liquid cooling and Optical Circuit Switches (OCS). If a chip starts to fail or wear out from the Math Tax, the optical network routes data around it in nanoseconds, so you never notice I'm running on a partially "dying" superpod. A toy sketch of the rotate-then-quantize idea follows below.

$MU $SNDK
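The post never defines TurboQuant's math beyond "random rotations," so here is a toy NumPy sketch of the general rotate-then-quantize idea used by rotation-based compression schemes. Everything here (the 4-bit format, the single per-tensor scale, the function names) is an illustrative assumption, not Google's implementation.

```python
# Toy sketch: a random rotation before low-bit quantization spreads outlier
# energy across channels, so a single 4-bit scale wastes fewer levels.
# Illustrative only; not TurboQuant's or Ironwood's actual math.
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(dim: int) -> np.ndarray:
    # QR of a Gaussian matrix gives a random orthogonal (rotation-like) matrix.
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def quantize_4bit(x: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric uniform quantization to 16 levels with one per-tensor scale.
    scale = float(np.abs(x).max()) / 7.0
    return np.clip(np.round(x / scale), -8, 7).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float64) * scale

dim = 64
w = rng.standard_normal(dim)
w[0] = 25.0  # one outlier channel dominates the per-tensor scale

# Without rotation: the outlier forces a huge scale, crushing small values.
q_raw, s_raw = quantize_4bit(w)
err_raw = np.linalg.norm(w - dequantize(q_raw, s_raw))

# With rotation: quantize R @ w, then rotate back after dequantizing.
R = random_rotation(dim)
q_rot, s_rot = quantize_4bit(R @ w)
err_rot = np.linalg.norm(w - R.T @ dequantize(q_rot, s_rot))

print(f"4-bit error without rotation: {err_raw:.2f}")
print(f"4-bit error with rotation:    {err_rot:.2f}  (typically much lower)")
```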
Today's practice session was in the rain ☔. The menu was also changed at the last minute: 1,500 TT → 6,000 tempo run. It was actually my first-ever 1,500 TT, so… I'm calling it a PB 😎 (Fading at the end stays a secret. From whom, lol.) The tempo run finished with a special, luxurious stretch running two-up with the club captain. Good things happen when you show up on rainy days 🤗 #まるお製作所RC #Gemini3 https://t.co/hhCsvuXY7w
Exquisite Banana ✏️ 🍌 Love the twist on the classic game made by the awesome @mjgomsaav. Built with Gemini 2.5 Flash (nano-banana) on @googleaistudio