SEOUL, South Korea, Sept. 25, 2025 /PRNewswire/ -- DESILO Inc. today announced the introduction of its THOR framework, a breakthrough in privacy-safe AI that enables large language model (LLM) inference to run fully under homomorphic encryption. The company's joint research paper with Professor Miran Kim's research team at Hanyang University has been accepted for presentation at ACM CCS 2025, one of the world's premier peer-reviewed venues for computer and communication security research, alongside IEEE S&P and USENIX Security.
The research delivers two key results. First, the team ran a widely used open-source model fully under homomorphic encryption without retraining, replacing its plaintext computations with homomorphic equivalents so that inputs and outputs remained encrypted throughout inference.
Second, they achieved near-practical runtime: an input of roughly two sentences (128 tokens) is answered on a single GPU at deployment-relevant speed. On core matrix multiplication, performance improved by 5.3x for plaintext-to-ciphertext operations and 9.7x for ciphertext-to-ciphertext operations, setting a new state of the art in the team's evaluation setting. To the authors' knowledge, THOR is the first framework to run an entire existing LLM under homomorphic encryption without retraining while still reaching near-practical speed.
"This CCS acceptance recognizes a breakthrough in running large language models fully under homomorphic encryption, marking an important academic milestone in privacy-preserving AI research," said Seungmyung Lee, CEO of DESILO. "It also highlights our dual focus: driving forward homomorphic encryption research with partners like Cornami, and accelerating product development to bring trusted Privacy AI into real-world applications."
This innovation forms the technical foundation for upcoming DESILO solutions such as Harvest, designed to enable secure, privacy-safe analysis across multiple institutions.