Leveraging advanced software technology, our AI computing broker optimizes GPU utilization for AI processing. This substantially reduces the number of GPU servers needed, cutting power consumption and overall AI development and operational costs.
Key Features and Differentiation:

Drastically reduces the number of GPU servers required for AI processing, cutting floor space and power consumption, and enables execution of jobs that exceed conventional GPU memory limits, dramatically improving overall AI development and operational costs
Deep technological expertise in performance analysis, HPC, and AI processing is a key differentiator
Key Business Benefits:

AI developers and SaaS providers: Reduces development and operating costs and enhances competitiveness
AI service providers: Enables more services to be deployed with limited resources, opening up new business opportunities
Technical Overview:
Target Industry/Users:
- Developers and providers of AI applications and services, data center operators
Challenges in Target Industry and Operations:
- Soaring prices and shortages of GPUs, essential for AI processing
Technical Challenges:
- Reducing the number of GPUs required by efficiently sharing them among multiple AI applications
Value Provided by AI computing broker:
- The following benefits are obtained simply by installing it without modifying programs:
- Improved AI processing throughput with existing GPU resources
- Reduced equipment costs (including cloud instance fees) thanks to the smaller number of GPUs required
- Ability to run applications larger than GPU memory, so many AI applications can be accommodated on fewer GPUs (a sketch of the general idea follows this list)
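How applications larger than GPU memory are handled is not detailed on this page. For illustration only, the sketch below shows one common technique, host-to-device offloading, in PyTorch: model blocks are kept in host memory and moved to the GPU one at a time. The AI computing broker is stated to provide this benefit without program changes, so this manual version is purely a conceptual aid and all names in it are hypothetical.

```python
# Illustration only: manual host-to-device offloading, one common way to run
# a model whose parameters exceed GPU memory. The AI computing broker is said
# to achieve this transparently; its actual mechanism may differ.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Model blocks are kept in host (CPU) memory; only one block occupies the GPU
# at any moment, so the total parameter size may exceed GPU memory.
blocks = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])

def forward_offloaded(x):
    x = x.to(device)
    for block in blocks:
        block.to(device)   # stage this block's weights on the GPU
        x = block(x)       # compute on the GPU
        block.to("cpu")    # evict the block to free GPU memory for the next
    return x

out = forward_offloaded(torch.randn(16, 4096))
print(out.shape)  # torch.Size([16, 4096])
```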
Fujitsu's Technological Advantage:
- Technology that analyzes the processing content of AI programs and allocates GPUs only to the processing that actually uses them (a conceptual sketch follows this list)
- Performance analysis and software technologies that draw out the full performance of devices such as GPUs, cultivated through the development of world-leading HPC technology
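As a conceptual aid to the allocation idea above, the sketch below time-shares one GPU among several worker processes: each worker holds a lease on the device only during its GPU-bound phase, so CPU-bound phases such as data loading leave the GPU free for others. This is a minimal sketch of the general scheduling concept under assumed names; it does not depict the AI computing broker's actual interfaces.

```python
# Minimal conceptual sketch (hypothetical names, not the product's API):
# several workers share one GPU by acquiring it only for GPU-bound phases.
import multiprocessing as mp
import time

def train_worker(worker_id, gpu_lease, steps=20):
    for _ in range(steps):
        time.sleep(0.05)      # CPU-bound phase (e.g. data loading): GPU is free
        with gpu_lease:       # GPU-bound phase: hold the device only briefly
            time.sleep(0.02)  # stands in for forward/backward passes
    print(f"worker {worker_id} done")

if __name__ == "__main__":
    gpu_lease = mp.Semaphore(1)  # one physical GPU shared by four workers
    workers = [mp.Process(target=train_worker, args=(i, gpu_lease))
               for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```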
Use Cases:
- End Users:
- Installing multiple AI applications on a single inference server to provide diverse services
- App Developers:
- Improving development throughput by enabling simultaneous execution of multiple training jobs during model parameter exploration
Case studies:
- Achieved approximately double the throughput by incorporating it into the model development process of TRADOM
- Achieved the same throughput with half the number of GPUs by applying it to AlphaFold2 (protein 3D structure prediction)
Technical Trial:
- A proof of concept (PoC) can be conducted.