Big news! The DeepSeek version of the Hengsheng Photon large model all-in-one machine is here!

Written by Jasper Cole
Updated on: July 10, 2025
Recommendation

An innovative piece of financial AI: efficiently deployable, secure, and controllable, it serves as an accelerator for digital transformation.

Core content:
1. The release background and advantages of the DeepSeek version of the Hengsheng Photon large model all-in-one machine
2. The all-in-one machine's industry-leading performance, supporting high-concurrency, high-throughput financial application scenarios
3. The all-in-one machine's full-stack open capabilities, which lower the threshold for introducing large models and accelerate AI infrastructure construction

Yang Fangxian
Founder of 53AI, Tencent Cloud Most Valuable Expert (TVP)


On March 20-21, the Huawei China Partner Conference 2025 was held in Shenzhen.
At the conference, Hengsheng officially released the DeepSeek version of the Photon large model all-in-one machine. Built on the Ascend 800I A2 inference server, the all-in-one machine provides financial institutions with an "out-of-the-box" full-stack financial AI engine, meeting the industry's needs for efficient AI application deployment, a secure and controllable computing power base, and digital business innovation, helping financial institutions quickly complete large model deployment and accelerating the industry's digital transformation.

At the beginning of 2025, DeepSeek, with its advantages of "low cost + high performance + high openness", gave financial institutions a smarter, more efficient, and lower-cost large model foundation.
However, actually adopting DeepSeek still presents many challenges: difficulty selecting computing power, long deployment and delivery cycles, data security and privacy risks, a lack of standardized knowledge-enhancement solutions, and the need to continuously optimize application effectiveness.
To address these pain points, Hengsheng has teamed up with Huawei Ascend to create the fully domestically produced DeepSeek version of the Photon large model all-in-one machine. It provides full-stack open capabilities, from the underlying computing resource pool, model service platform, enterprise-level knowledge base, and intelligent agent orchestration ecosystem components through to system delivery, operation, and maintenance, covering the entire process of model debugging, deployment, and operation. This effectively lowers the threshold for introducing large models and helps financial institutions build cost-effective AI infrastructure.
The DeepSeek version of the Hengsheng Photon large model all-in-one machine supports deployment of the full-scale DeepSeek R1/V3 models. The 671B full-scale version of DeepSeek can be deployed on two 16-card Ascend inference servers and achieves high concurrency and high throughput on the domestic open-source inference engine MindIE. In the most typical financial-industry scenario of 4096-token input and 1024-token output, two 16-card Ascend inference servers can support more than 100 concurrent channels while maintaining a per-channel output speed of 10 tokens/s, satisfying both single-channel latency and high-concurrency requirements, with performance at an industry-leading benchmark level.
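As a rough sanity check of the figures quoted above, here is a back-of-the-envelope calculation (a minimal sketch: the 100-channel and 10 tokens/s numbers come from the paragraph above, and the derived figures are simply their arithmetic consequences, not vendor-published benchmarks):

```python
# Back-of-the-envelope estimate for the 4096-token-input / 1024-token-output
# scenario described above. Inputs come from the text; derived numbers are
# illustrative, not measured results.
concurrent_channels = 100           # concurrent request channels
per_channel_tps = 10                # output tokens per second per channel
output_tokens_per_request = 1024    # typical output length in this scenario

aggregate_tps = concurrent_channels * per_channel_tps                      # ~1,000 tokens/s overall
decode_seconds_per_request = output_tokens_per_request / per_channel_tps   # ~102 s of decoding per request

print(f"Aggregate output throughput: ~{aggregate_tps} tokens/s")
print(f"Decode time per request:     ~{decode_seconds_per_request:.0f} s")
```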
The Hengsheng Photon large model all-in-one machine comes with more than 100 general-purpose models built in (such as the DeepSeek distilled versions, Qwen, Llama, GPT, GLM, and image, audio, and video models), enabling AI applications to respond to diverse requests. Multi-model management and intelligent scheduling are implemented on the large model MaaS platform, covering offline task scenarios such as feature extraction and content review as well as business-specific generation scenarios such as long-text understanding and output, multi-turn dialogue, and complex data. Customers can choose different model bases based on "scenario + experience + cost".
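To illustrate what choosing model bases by "scenario + experience + cost" can look like in practice, here is a minimal, purely illustrative routing sketch; the model names and routing rules are hypothetical placeholders, not the MaaS platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str    # which built-in model base to dispatch to (hypothetical names)
    reason: str  # how the "scenario + experience + cost" trade-off was made

# Hypothetical routing table mapping request scenarios to model bases.
ROUTING = {
    "feature_extraction": ModelChoice("qwen-small", "offline batch task, cost-sensitive"),
    "content_review":     ModelChoice("glm-moderation", "offline task, recall first"),
    "long_text":          ModelChoice("deepseek-r1-671b", "long-context understanding"),
    "multi_turn_dialog":  ModelChoice("deepseek-v3-671b", "interactive experience first"),
}

def route(scenario: str) -> ModelChoice:
    """Pick a model base for a request; fall back to a general-purpose default."""
    return ROUTING.get(scenario, ModelChoice("deepseek-v3-671b", "general-purpose default"))

if __name__ == "__main__":
    for s in ("content_review", "multi_turn_dialog", "unknown_scenario"):
        choice = route(s)
        print(f"{s:>18} -> {choice.name} ({choice.reason})")
```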

The privately deployed, integrated hardware-and-software model offers advantages such as one-stop delivery, cost optimization, and compliance risk control:
  • AI empowerment: One-stop delivery of the full AI stack shortens the delivery cycle by 40%. It also provides services such as knowledge operations, large model training and fine-tuning, and AI application development support, helping customers quickly implement AI applications and accelerate business innovation;
  • Cost optimization: Software-hardware collaborative optimization and intelligent resource scheduling reduce computing power redundancy, shorten deployment cycles, and help financial institutions move efficiently toward asset-light investment;
  • Risk control: Private deployment keeps institutional data on local servers throughout processing, safeguarding data sovereignty; built-in knowledge security and compliance controls and full knowledge lifecycle management avoid the risk of sensitive information leakage and promote the safe, effective accumulation and sharing of knowledge.

It is worth mentioning that the AI agent orchestration service provided by the all-in-one machine gives financial institutions customized development support for general-task AI agents, empowering a wide range of user groups and promoting "AI equality".
In addition, data quality is crucial for large models. Through its self-developed iKnow knowledge platform, Hengsheng provides knowledge storage and support services that help financial institutions build exclusive knowledge bases on their core business data, so that large models can "better understand the enterprise and its business", releasing the value of business data and improving the quality of AI services.
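To make the role of such a knowledge base concrete, the sketch below shows a minimal retrieve-then-generate flow in which a model's answer is grounded in an institution's own documents; every function and document name here is a hypothetical placeholder, not the iKnow platform's actual interface:

```python
# Minimal retrieval-augmented sketch: ground a model's answer in an
# institution's own knowledge base. All names are hypothetical placeholders.

def search_knowledge_base(query: str, top_k: int = 3) -> list[str]:
    """Stand-in for a knowledge-base retrieval call; returns relevant passages."""
    corpus = {
        "custody operations manual": "Settlement instructions must be matched before the daily cutoff ...",
        "product compliance rules": "Marketing materials require a risk disclosure statement ...",
        "fund fee schedule": "The management fee is accrued daily and paid monthly ...",
    }
    # Naive keyword match on document titles, for illustration only.
    words = query.lower().split()
    hits = [text for title, text in corpus.items() if any(w in title for w in words)]
    return hits[:top_k]

def generate(prompt: str) -> str:
    """Stand-in for a call to the deployed large model."""
    return f"[model answer grounded in {prompt.count('Context:')} retrieved passage(s)]"

def answer(question: str) -> str:
    passages = search_knowledge_base(question)
    context = "\n".join(f"Context: {p}" for p in passages)
    return generate(f"{context}\nQuestion: {question}\nAnswer using only the context above.")

if __name__ == "__main__":
    print(answer("custody operations settlement cutoff"))
```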
The full-stack AI capabilities of the Hengsheng Photon large model all-in-one machine truly open up the "last mile" of AI applications, helping financial institutions quickly develop and launch such intelligent applications and achieve the goals of cost reduction, efficiency improvement, and value creation.


At present, Hengsheng has launched intelligent applications for core financial business scenarios, including an investment advisory intelligent assistant, investment advisory content generation, pre-job simulation training, an intelligent investment research assistant, a custody operations assistant, an internal and external intelligent review assistant, intelligent data statistics generation, automatic code generation, and AI testing.
Large model technology is now gradually entering the stage of industrial application, and software-hardware collaborative solutions, represented by large model all-in-one machines, have become key infrastructure for the industry's intelligent upgrade, helping large model applications reach the inclusive stage. Going forward, Hengsheng will continue to work with outstanding domestic computing power vendors and large model vendors to build an AI base with excellent performance, security, and controllability, jointly creating a safe, efficient, and inclusive new financial digital ecosystem and injecting the most cutting-edge technological capabilities into the financial industry.