Chromia has entered a technical partnership with Mantle-based Chasm Network to improve the transparency and accountability of artificial intelligence (AI) systems by using blockchain for data management. The collaboration, announced in a press release on October 21, will use Chromia’s decentralized platform as a database layer to store AI inference data.
Under the partnership, Chromia will support a decentralized application (dApp), already deployed on its mainnet, that will generate transparent and immutable records. These records are designed to make the data sources used in AI decision-making verifiable, improving the trustworthiness of AI outputs.
Chasm Network, which operates on the Mantle blockchain, also plans to launch its native token, Chasm AI, on October 24, though its exact role in the partnership has yet to be clarified. Yeou Jie, Chromia’s head of business development, emphasized that the collaboration will enhance secure and efficient data management, particularly for complex use cases like decentralized AI.
The Intersection of Blockchain and AI
As the convergence of blockchain and AI technologies continues to gain traction, major tech companies are looking to integrate blockchain solutions into their AI-driven systems. Samsung, for example, recently unveiled plans to expand its use of blockchain to strengthen security for its AI-powered home appliances.
In a blog post, Samsung revealed that it would extend its existing Knox Matrix framework, which is currently used on mobile devices and televisions, to a wider array of smart home products. This system uses a private blockchain to establish a “Trust Chain” that enables interconnected devices to monitor each other for potential security threats and alert users when issues are detected. This initiative signals growing interest in leveraging blockchain’s transparency and security features to enhance the safety of AI-driven technologies.
By pairing blockchain with AI, both Chromia and Samsung are contributing to a broader trend of bringing greater accountability and security to AI systems, ensuring that users can trust the underlying data and processes that drive these technologies.