EnCharge AI, an early-stage startup, has announced the EN100, an AI accelerator chip built on a novel analog architecture aimed at dramatically changing the economics of artificial intelligence (AI). The company says the chip delivers record-breaking performance while sharply reducing energy use, setting an industry-leading standard for efficiency.
With the EN100, EnCharge AI promises an unrivaled performance-per-watt ratio, claiming up to 20x better efficiency than its competitors. That jump is especially compelling for applications that need serious computing firepower without the added energy expense. Unlike conventional technologies, the EN100 measures accumulated charge rather than the flow of charge, a method that greatly reduces the extraneous noise that can corrupt machine-learning computations.
Unique Architecture for Improved Performance
EnCharge AI’s innovative approach embeds a precisely designed array of capacitors within the layers of copper interconnect above the silicon in its processors. This architecture is the key to the EN100 chip’s performance potential.
Naveen Verma, head of EnCharge AI’s lab at Princeton University, describes the underlying technique as switched-capacitor operation, a method engineers have relied on for the past three decades. In reviving it, the company has mastered analog computation, a discipline largely abandoned in the wake of digital computing, and it reports that the approach cuts energy usage by nearly 80% for machine-learning workloads.
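To make the idea concrete, here is a toy numerical sketch of an in-memory dot product, where each "capacitor" accumulates a charge proportional to weight times input and the column is read out once. This is an illustrative model only, not EnCharge's actual circuit; the function name and the Gaussian noise term are assumptions for the sketch.

```python
import numpy as np

def analog_dot_product(weights, inputs, noise_std=0.001):
    """Toy model of a switched-capacitor in-memory dot product.

    Each weight is stored on a capacitor that collects a charge
    proportional to weight * input; charge sharing then sums the
    whole column, so the multiply-accumulate happens in the analog
    domain instead of in digital logic.
    """
    charges = weights * inputs       # per-capacitor charge, Q_i ~ w_i * x_i
    total_charge = charges.sum()     # charge sharing sums the column at once
    # Reading out accumulated charge (rather than current flow) limits
    # noise, but a small readout error term remains in this model.
    noise = np.random.normal(0.0, noise_std) if noise_std > 0 else 0.0
    return total_charge + noise

w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 0.5])
print(analog_dot_product(w, x, noise_std=0.0))  # 0.5, matching np.dot(w, x)
```

With noise disabled the result matches an exact dot product; the `noise_std` knob is a stand-in for the analog error sources the charge-based readout is designed to minimize.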
Verma elaborated on the advantages of this technology, stating that it “means advanced, secure, and personalized AI can run locally, without relying on cloud infrastructure.” This local processing capability sets a new standard for efficiency, security and personalization for end-users.
Funding and Future Collaborations
EnCharge AI recently received a major shot in the arm, raising a $100 million Series B backed by heavyweight investors including Samsung Ventures and Foxconn. The fresh capital will let the startup further refine its technology and grow its user base faster.
Looking to the future, EnCharge AI will focus on recruiting a new cohort of early-access partners. These collaborations will let developers and researchers experiment extensively with the EN100 chip and explore what it can do across a wide array of sectors. Through these partnerships, EnCharge AI aims to become a leader in the analog AI space.
As Verma noted of the capacitors, “The only thing they depend on is geometry, basically the space between wires.” The remark underscores how central physical design is to the technology, and how critical it is for hitting those performance targets.
The Path Forward for Analog AI
Building on research that began in 2017, EnCharge AI has pioneered the field of analog AI technology and stayed at the cutting edge of what is possible in the space. Today the company produces a processor card that can run 200 trillion operations per second while drawing only 8.25 watts. The product is designed to preserve battery life in new AI-capable laptops, and it has attracted considerable interest from today’s tech-savvy consumers.
EnCharge AI’s ambitions go further: for high-performance AI workstations, it intends to build a 4-chip card capable of a staggering 1,000 trillion operations per second. That roadmap makes EnCharge AI a strong contender in the field, competing directly with other analog AI upstarts such as Mythic AI and Sagence.
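The figures quoted above imply an efficiency number worth spelling out. A back-of-the-envelope calculation (the constant-efficiency extrapolation for the 4-chip card is my assumption, not a company claim):

```python
# Efficiency implied by the single-card figures quoted above.
tops = 200.0     # trillion operations per second
watts = 8.25     # reported power draw
tops_per_watt = tops / watts
print(round(tops_per_watt, 1))  # 24.2 TOPS per watt

# If the planned 4-chip, 1,000-TOPS card held the same efficiency,
# it would draw roughly 41 watts (an assumption, not a spec).
est_watts = 1000.0 / tops_per_watt
print(round(est_watts))  # 41
```

At roughly 24 TOPS/W, the claimed numbers sit well above the single-digit TOPS/W typical of mainstream laptop NPUs, which is the substance behind the "up to 20x" efficiency claim.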
The company’s roots trace back to Princeton University, home to an advanced and ambitious lab focused on smart integrated systems and led by Naveen Verma, the company’s chief visionary. The lab reads like a museum of engineering ingenuity, filled with examples of different efforts to use analog techniques to run AI super-efficiently. As Verma remarked, “it turns out, by dumb luck, the main operation we’re doing is matrix multiplies.” The comment captures the serendipity of innovation and the value of building on earlier methodologies to catalyze future breakthroughs.
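Verma’s observation is easy to verify in miniature: the forward pass of a neural network’s fully connected layer is, at its core, a single matrix multiply, which is exactly the operation analog in-memory hardware accelerates. A minimal NumPy sketch (the shapes are arbitrary, chosen for illustration):

```python
import numpy as np

# One fully connected layer: 8 inputs mapped to 4 outputs.
rng = np.random.default_rng(seed=0)
W = rng.standard_normal((4, 8))   # layer weights
x = rng.standard_normal(8)        # input activations
y = W @ x                         # the dominant compute step is a matmul
print(y.shape)                    # (4,)
```

Stacking many such layers, plus the attention blocks in modern models, keeps matrix multiplication the overwhelming majority of inference work, which is why a chip that does matmuls efficiently matters so much.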

