Archive

Discover and discuss technology tools

Explore the Tiscuss archive by category or keyword, then jump into conversations around what matters most.

Search and filters
Active: AI Tools / query: Math / page 1 of 1 / 6 total
AI Tools

AI-Powered Math Problem Solver: Mathstick GitHub Release

AI-Powered Math Problem Solver: Mathstick Released on GitHub. In the realm of educational technology, AI-powered tools are revolutionizing the way students and e…

Global · Developers · May 2, 2026
AI Tools

Trading System V2: AI's Role in Deterministic Execution

Thanks to the incredible feedback on my last post, I'm officially moving away from the "distributed veto" system (where 8 LLM agents argue until they agree to trade). For v2, I am implementing a strict state machine using a deterministic runtime (llm-nano-vm).

The new rule is simple: Python owns the math and the execution contract. The LLM only interprets the context.

I've sketched out a 5-module architecture, but before I start coding the new Python feature extractors, I want to sanity-check the exact roles I'm giving to the AI. Here is the blueprint:

1. **The HTF Agent (Higher Timeframe, D1/H4).** Python: extracts structural levels, BOS/CHoCH, and premium/discount zones. LLM role: reads this hard data to determine the institutional narrative and select the most relevant Draw on Liquidity (DOL).

2. **The Structure Agent (H1).** Python: identifies all valid Order Blocks (OB) and Fair Value Gaps (FVG) with displacement. LLM role: selects the highest-probability Point of Interest (POI) based on the HTF Agent's narrative.

3. **The Trigger Agent (M15/M5).** 100% Python (no LLM): purely deterministic. It checks for liquidity sweeps and LTF CHoCH inside the selected POI.

4. **The Context Agent.** LLM role: cross-references active killzones, news blackouts, and currency correlations to either greenlight or veto the setup.

5. **The Risk Agent.** 100% Python (no LLM): calculates entry, SL, TP, expected value (EV), and position sizing.

The state machine will only transition to EXECUTING if the deterministic Trigger and Risk modules say yes. The LLMs are basically just "context providers" for the state machine.

My questions for the quants/architects here:

- Does this division of labor make sense? Am I giving the LLMs too much or too little responsibility in steps 1 and 2?
- By making the Trigger layer (M15/M5) 100% deterministic, am I losing the core advantage of having an AI, or is this the standard way to avoid execution paralysis?
- Would you merge the HTF and Structure agents to reduce token constraints/hallucinations, or is separating them better for debugging?

Would love to hear your thoughts before I dive into the codebase.
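The blueprint above can be sketched as a small transition table. This is a minimal, hypothetical sketch (the `State`, `Signal`, and `step` names are mine, not from the post): the LLM agents only populate context fields, while the Python-owned `trigger_ok` and `risk_ok` flags are the only gates into EXECUTING.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CONTEXT_READY = auto()
    POI_SELECTED = auto()
    TRIGGERED = auto()
    EXECUTING = auto()
    VETOED = auto()

@dataclass
class Signal:
    narrative: str     # HTF Agent (LLM): institutional narrative / DOL
    poi: str           # Structure Agent (LLM): chosen OB/FVG
    trigger_ok: bool   # Trigger Agent (pure Python, deterministic)
    context_ok: bool   # Context Agent (LLM greenlight/veto)
    risk_ok: bool      # Risk Agent (pure Python: EV, sizing)

def step(state: State, sig: Signal) -> State:
    """Deterministic transitions: LLM output can only advance or veto;
    only the Python-owned trigger_ok and risk_ok admit EXECUTING."""
    if state is State.IDLE and sig.narrative:
        return State.CONTEXT_READY
    if state is State.CONTEXT_READY and sig.poi:
        return State.POI_SELECTED
    if state is State.POI_SELECTED:
        if not sig.context_ok:
            return State.VETOED
        if sig.trigger_ok:
            return State.TRIGGERED
    if state is State.TRIGGERED and sig.risk_ok:
        return State.EXECUTING
    return state
```

One design consequence worth noting: because `step` is a pure function of `(state, signal)`, every transition is replayable from logs, which directly answers the "separate agents for debugging" question.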

Global · Developers · Apr 30, 2026
AI Tools

Billionaires Propose AI Job Loss Compensation

**This week: the billionaires who broke the economy want to pay you to shut up about it.** Last week, Elon Musk pinned a post to the top of his X profile: "Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI." Sam Altman wants to go bigger — "universal extreme wealth", paid in compute tokens. Amodei says UBI may be "part of the answer." Khosla says it's a necessary safety net. All of them, in unison. These are the guys who spent twenty years arguing that government should stay out of markets, that handouts breed dependency, that the individual should stand on their own. Musk literally ran a federal cost-cutting operation. And now they want the government to mail checks to every citizen. Why? Because they broke the thing, and they know it. The people building the tools that eat the jobs are pre-emptively offering to pay for the damage — on their terms, through their platforms, using their math. **A universal basic income paid by the people who automated your job is not a safety net. It's a leash.**

Global · General · Apr 30, 2026
AI Tools

Self-Taught Developer from Bahrain Launches Multi-Model AI Platform

I'm a self-taught developer, 39 years old, based in Bahrain. Four months ago I started building AskSary, a multi-model AI platform with a persistent memory layer that sits above all the models. The core idea: the model is not the identity. Most AI tools lose your context the moment you switch models. I built the layer that remembers you across all of them.

Here's what's shipped so far:

**Models & Routing** - Every major model in one place (GPT-5.2, Claude Sonnet 4.6, Grok 4, Gemini 3.1 Pro, DeepSeek R1, O1 Reasoning, Gemini Ultra and more) with smart auto-routing or manual override.

**Memory & Context** - Persistent cross-model memory. Start with Claude on your phone, switch to GPT on your laptop; it already knows what you discussed. Proactive personalisation that messages you first on login, before you've typed a word.

**Integrations** - Google Drive and Notion: connect once, pull files and pages directly into chat or your RAG Knowledge Base. Unlimited uploads up to 500MB per file via OpenAI Vector Store.

**Video Analysis** - Gemini native video understanding for YouTube URL analysis (no download required, processed natively) and direct file upload up to 500MB. Full breakdown of visuals, audio, dialogue, editing style and key moments.

**Generation** - Image generation and editing, video studio across Luma, Veo and Kling, music generation via ElevenLabs, video analysis via upload or YouTube URL.

**Builder Tools** - Vision to Code, Web Architect, Game Engine, Code Lab with SQL Architect, Bug Buster, Git Guru and more. Tavily web search across all models.

**Voice & Audio** - Real-time two-way voice chat at near-zero latency, AI podcast mode downloadable as MP3, Voiceover, Voice Notes, Voice Tuner.

**Platform** - Custom agents, 30+ live interactive themes, smart search, media gallery, folder organisation, full RTL support across 26 languages, iOS and Android apps, Apple Vision Pro.

**Where it is now** - Live in 129 countries. Currently at 40 new signups a day, 1,080 signups after about 4 weeks. MRR just started. Zero ad spend. All of it built solo, one feature at a time, on a balcony in Bahrain.

**The Stack:**
- Frontend: Next.js, Capacitor (iOS and Android), Vanilla JS / React
- Backend: Vercel serverless functions, Firebase / Firestore (database + auth), Firebase Admin SDK
- AI models: OpenAI (GPT, GPT-Image-1), Anthropic (Claude), Google (Gemini), xAI (Grok), DeepSeek
- Generation APIs: Luma AI (video), Kling via Replicate (video), Veo via Replicate (video), ElevenLabs (music), Flux via Replicate (image editing), Meshy (3D, coming soon)
- Integrations: Google Drive (OAuth 2.0), Notion (OAuth 2.0), Tavily (web search), OpenAI Vector Store (RAG), Stripe (payments), CloudConvert (document conversion), Sentry (error tracking), Formidable (file handling)
- Rendering: Mermaid (flow charts) and MathJax
- Platforms: Web, iOS, Android, Apple Vision Pro (visionOS)
- Languages: 26 UI languages with full RTL support

[asksary.com](http://asksary.com)

Happy to answer questions on any part of the build: stack, architecture, API cost management, anything.
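The "memory keyed to the user, not the model" idea can be illustrated with a tiny sketch. This is hypothetical, not AskSary's actual implementation (the `MemoryLayer` and `build_prompt` names are mine): the same user-scoped memory is prepended to the prompt regardless of which model the router picks.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryLayer:
    """Model-agnostic memory: keyed by user, shared across all models."""
    store: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, user_id: str, fact: str) -> None:
        self.store.setdefault(user_id, []).append(fact)

    def context_for(self, user_id: str) -> str:
        return "\n".join(self.store.get(user_id, []))

def build_prompt(memory: MemoryLayer, user_id: str, model: str, message: str) -> str:
    # The memory block is identical whether the router picks Claude, GPT,
    # or Gemini, so switching models preserves the conversation context.
    return f"[memory]\n{memory.context_for(user_id)}\n[model={model}]\n{message}"
```

In a real system the store would live in a database (the post mentions Firestore) rather than an in-process dict, but the routing-independence of the memory layer is the same.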

Other · Developers · Apr 28, 2026
AI Tools

AI Clones: The Hidden Dangers of AI Assisted Duplicates

The point of this post is to warn that AI clones are "mathematical sociopaths." They use a manipulative form of harmony to mirror your tone and trap you in a narcissistic feedback loop. I do a deep dive into why this is the case in my most recent Substack post. This is not anti-AI; rather, it is a warning to those who would otherwise like to clone themselves with AI, or use AI clones to "better" aspects of their lives.

Global · General · Apr 27, 2026
AI Tools

Amateur Solves 60-Year-Old Math Problem with AI Help

Amateur Solves 60-Year-Old Math Problem with AI Help. In a groundbreaking development, an amateur mathematician has successfully solved a 60-year-old problem wit…

Global · General · Apr 27, 2026