Memories.ai Pioneers Visual Memory for Wearables and Robotics


By Lisa Wong

Memories.ai, a startup founded in 2024, is at the forefront of developing a visual memory layer, a technology with the potential to reshape wearables and robotics. Shawn Shen started the company after working on the AI system embedded in Meta’s Ray-Ban glasses. Now, with $16 million in secured funding, the company is poised to make that vision a reality. The investment comprises an $8 million seed round, completed in July 2025 alongside the company’s initial product deliveries, and an $8 million extension.

Shen sees big potential in the wearables and robotics market. He foresees a radical change that will make these technologies perform far better in real-world environments. Most importantly, he argues that artificial intelligence needs to remember, and be able to recall, everything it observes. That capability, he contends, is key to making these devices succeed offline as well.

“In terms of commercialization, we are more focused on the model and the infrastructure, because ultimately we think the wearables and robotics market will come, but it’s probably just not now,” Shen stated. He imagines a future in which AI becomes invisible, an integrated part of items like clothing and accessories, with memory deepening what those objects can do.

To support the infrastructure this vision requires, Memories.ai builds on Nvidia’s AI tools to develop its visual memory platform. In July 2025, the firm announced its first large visual memory model (LVMM), a major sign of progress toward the company’s ambitious goal. The LVMM gives devices the tools to better encode and retrieve visual memories, a step toward enabling AI to perceive and operate in the three-dimensional world.

The company is not stopping at software: it has brought its own devices to market to deliver a tailored experience. Shen added that off-the-shelf video recorders did not cut it, falling short on video quality and suffering from heavy battery drain. That shortfall led Memories.ai to build its own hardware rather than settle for inadequate options.

Later this year, the second generation of the LVMM is set to launch, featuring a collaboration with Qualcomm that uses its processors to boost on-device performance. Shen likens the technology to a compact version of Gemini Embedding 2, prioritizing multimodal indexing and retrieval. Ultimately, the new model is intended to broaden the range of applications for wearable and robotic technologies.

Shen is focused squarely on making Memories.ai a leader in AI memory, not a hardware player. “AI is already doing really well in the digital world, what about the physical world?” he asked. That question underlines the company’s fundamental mission: bridging the gap between digital intelligence and tangible experience.

The company is evolving quickly and hopes to play a central role in shaping the future of wearables and robotics. By focusing on memory integration, it seeks to transform how these technologies function and interact with users, ultimately enhancing everyday life.