
Memories.ai raises $8 million in seed funding to advance its Large Visual Memory Model, which enables AI systems to store and retrieve long-term visual experiences. Founded by former Meta Reality Labs researchers, the company targets industries like security, media, marketing, and mobile devices. Its technology allows AI to understand video context across unlimited timeframes, addressing a key gap in current systems.
Meet the Startup Teaching AI to Remember Like a Human
Memories.ai is developing artificial intelligence infrastructure that enables machines to store and recall visual experiences over unlimited timeframes. Unlike conventional AI models, which are optimized for fast processing of transient input, the company focuses on building persistent, searchable memory systems for video data. Its goal is to introduce a memory layer to AI systems that mimics human recall—connecting, storing, and retrieving video-based knowledge across long spans of time. Its core product is the Large Visual Memory Model (LVMM), which offers a new foundation for context-rich machine understanding in areas like security, entertainment, and consumer devices.
$8 Million and a Mission to Transform AI Memory
Memories.ai secured $8 million in seed funding to expand development and operational capacity. The investment round was led by Susa Ventures, with participation from Samsung Next, Crane Venture Partners, Fusion Fund, Seedcamp, and Creator Ventures.
According to the company, this funding will accelerate progress in integrating visual memory systems into mobile platforms and enterprise applications. The funds will also support further development of its API-based services and chatbot tools that allow users to search and analyze large volumes of video data.
From Meta Labs to Memory Pioneers
The company was founded by Dr. Shawn Shen and Enmin Zhou, who previously worked at Meta Reality Labs in the United Kingdom. Shen moved to the UK at age 14 on a scholarship, studied at Cambridge, and went on to complete his PhD. His research background complements Zhou's experience in product engineering, a pairing that has let the team build high-complexity systems quickly.
Their shared mission is rooted in solving one of the most technically demanding challenges in artificial intelligence—long-term memory for visual data.
The Tech That Lets AI Remember Decades of Video
The Large Visual Memory Model (LVMM) is designed to manage and recall video content at a scale and persistence level current AI systems cannot achieve. Existing models retain only short-term video context, often no more than about 30 minutes of footage. Memories.ai offers an alternative approach based on four sequential capabilities:
- Compressing video input into high-density memory representations
- Indexing these representations into a structured, searchable format
- Aggregating visual data across multiple sources
- Retrieving relevant memories in real time through natural language interaction
This architecture allows AI systems to analyze decades of footage and recall specific patterns or events. In one use case, the system can identify every instance of a specific basketball move across an athlete’s career. In another, it can surface every brand mention in millions of social media videos.
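The four-stage flow above can be sketched in miniature. LVMM's internals are not public, so every class and method name below is hypothetical, and the "compression" stage is faked with sets of detected labels rather than real visual embeddings; the sketch only illustrates the compress → index → aggregate → retrieve pattern:

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    source: str        # camera or clip identifier (hypothetical field)
    timestamp: float   # seconds from the start of the footage
    tags: frozenset    # stand-in for a compressed visual representation

class VisualMemoryStore:
    """Toy illustration of a visual-memory pipeline; not Memories.ai's design."""

    def __init__(self):
        self._index = {}  # tag -> list of MemoryRecord (the searchable index)

    def compress(self, source, timestamp, labels):
        # Stage 1: reduce raw video input to a compact representation.
        # Here the "embedding" is faked as a set of detected labels.
        return MemoryRecord(source, timestamp, frozenset(l.lower() for l in labels))

    def ingest(self, record):
        # Stages 2-3: index the representation; aggregation across
        # sources falls out of sharing one index for every camera.
        for tag in record.tags:
            self._index.setdefault(tag, []).append(record)

    def retrieve(self, *query_terms):
        # Stage 4: recall every record matching all query terms,
        # ordered by time (a stand-in for natural-language search).
        hits = None
        for term in (t.lower() for t in query_terms):
            matches = {id(r) for r in self._index.get(term, [])}
            hits = matches if hits is None else hits & matches
        by_id = {id(r): r for recs in self._index.values() for r in recs}
        return sorted((by_id[h] for h in hits or ()), key=lambda r: r.timestamp)

store = VisualMemoryStore()
store.ingest(store.compress("cam-1", 12.0, ["person", "fall"]))
store.ingest(store.compress("cam-2", 340.5, ["person", "bicycle"]))
falls = store.retrieve("person", "fall")  # only the cam-1 record matches
```

A production system would replace the label sets with learned embeddings and the dictionary with a vector index, but the division of labor between compression, indexing, aggregation, and retrieval is the same.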

Where This Memory Tech Is Already Making Noise
Memories.ai’s platform is being deployed across multiple sectors, each with high-volume video content demands. These include:
Security and Safety
- Search months of surveillance video within seconds
- Detect falls and suspicious behavior in real time
- Match individuals across cameras, even when appearances change
Media and Production
- Locate specific scenes or objects across large archives
- Automate editing and video summarization
Marketing Analytics
- Track brand mentions across platforms like TikTok
- Analyze visual sentiment at scale
Consumer Devices
- Samsung is among the companies exploring memory integration for future mobile platforms
- The system supports tools for video search, text conversion, and content drafting
Memories.ai provides access through both API endpoints and a web-based chatbot where users can upload video files or connect to existing libraries.
What Happens When AI Starts to Remember You
According to co-founder Shawn Shen, memory is the critical missing layer in most AI systems. The goal is to allow machines not just to process images or language, but to build an interconnected network of visual experiences—similar to human memory. This shift enables AI to deliver contextual responses informed by cumulative visual interactions.
Such systems could enable digital assistants that remember every past conversation, robots that adapt through accumulated experience, or wearables that store a lifetime of visual data. By maintaining memory across timeframes and input sources, the technology offers a framework for deeper and more personalized AI interactions.
Why Visual Memory Is the Next Frontier for AI
Memories.ai is developing tools that address a fundamental limitation in current AI architecture: the inability to recall visual context over extended periods. The company’s approach focuses on building long-term video memory into the core of machine intelligence, rather than treating visual input as short-lived or expendable.
By focusing on persistent memory systems rather than simply optimizing for speed or model size, Memories.ai is aiming to change how AI interacts with data-intensive environments like video, where long-term patterns and knowledge are essential.

