What Has Changed (And Why You Should Care)
If you’ve been using chatbots like ChatGPT, Claude, or Gemini for basic questions and answers, you’re only scratching the surface of what…
Sub-$200 Lidar coming soon, aiming at under $100 eventually
https://spectrum.ieee.org/solid-state-lidar-microvision-adas
https://fortune.com/2026/02/18/tesla-robotaxi-safety-concerns-crashes-elon-musk-big-bet/
Automotive-grade lidar prices are dropping fast, aiming at the sub-$200 range soon. Will Elon Musk admit that he was wrong and start putting lidar in all cars and robots? Or…
Opus 4.6 Wrote a C Compiler. Now What?
https://www.anthropic.com/engineering/building-c-compiler
Why this AI breakthrough is a massive productivity booster for devs, not a replacement. If you’ve been online this week, you’ve probably seen the headlines: Anthropic’s new Opus 4.6…
From Intuition to Intelligence: How AI Is Redefining Expertise
“In the old days, many decisions were simply based on some mixture of experience and intuition. Experts were ordained because of their decades of individual trial-and-error experience.” — Ian Ayres, Super…
The $175 Billion Moat: Why AI Startups Can’t Out-Spend Big Tech
https://www.cnbc.com/2026/02/04/alphabet-resets-the-bar-for-ai-infrastructure-spending.html Google is planning to spend between $175 billion and $185 billion in capital expenditures in 2026, with the vast majority going toward AI infrastructure. For context, Meta expects to…
Who will buy Salesforce? AI or TradTech?
https://www.thestreet.com/investing/stocks/salesforce-stock-gets-a-brutal-reality-check-amid-software-slump The company that mocked Oracle is now worth less than most leading AI startups. Who’s laughing now? A few years ago, Salesforce was the cloud darling that mocked Oracle…
How to Securely Access Ollama and Swama Remotely on macOS with Caddy
Run two local AI runtimes behind a single secure reverse proxy with separate authentication Running Ollama or Swama locally is straightforward: start the server and connect via localhost. But if…
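A minimal sketch of the setup the teaser describes, with one Caddy site routing to both runtimes by path prefix, each behind its own credentials. Assumptions: Ollama on its default port 11434; Swama's port, the domain, usernames, and the bcrypt hashes (generated with `caddy hash-password`) are all placeholders.

```caddyfile
ai.example.com {
	# Route /ollama/* to the local Ollama server, stripping the prefix.
	handle_path /ollama/* {
		basic_auth {
			# Placeholder user and bcrypt hash from `caddy hash-password`.
			ollama-user $2a$14$REPLACE_WITH_HASH
		}
		reverse_proxy localhost:11434
	}

	# Route /swama/* to Swama, with separate credentials.
	handle_path /swama/* {
		basic_auth {
			swama-user $2a$14$REPLACE_WITH_HASH
		}
		# Placeholder port; use whatever port Swama serves on.
		reverse_proxy localhost:28100
	}
}
```

Using `handle_path` (rather than `handle`) strips the `/ollama` or `/swama` prefix before proxying, so each runtime sees the API paths it expects; Caddy also provisions TLS for the domain automatically.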
Swama vs Ollama: Why Apple Silicon Macs Deserve a Faster Local AI Runtime
The reason is simple: because you can — and even faster now. If you have an Apple Silicon Mac (M1 or later) with 16GB of RAM or more, you can run powerful LLMs…
Why Google Needed This Deal More Than Apple
Google needed this deal far more than Apple. By embedding Gemini into 1.5 billion iPhones, Google secured its relevance as traditional search faces obsolescence in the AI era.
From Ollama to MLX: Achieving 2-3x Performance on Apple Silicon
Unlock 2-3x faster AI on Apple Silicon! This post explores optimizing models with Ollama and MLX, boosting performance for demanding applications.
Exo is back after nearly 10 months: run your own DeepSeek v3.1 671B with RDMA at 32.5 t/s
https://exolabs.net
https://github.com/exo-explore/exo
https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-studio-rdma-over-thunderbolt-5
If you have about $40–50k USD for four Mac Studios with 512GB RAM each, you can run the full DeepSeek at 32.5 t/s. Probably the cheapest way to do…
How quickly can OpenAI fall?
https://www.the-independent.com/tech/google-gemini-vs-chatgpt-cloudflare-ai-b2881240.html
https://x.com/Similarweb/status/1942838749588775302
Gemini is growing fast. Back to the “Just Google It” days soon. Now waiting for Meta to find a way to make AI relevant or useful in social. Same…