GPU Optimised Workstations

Optimised for GPU computing, our range of workstations is capable of blisteringly fast performance.

  • Blazing fast GPU performance
  • Enterprise-Grade Components
  • GPU / Supercomputing Technical Sales Team
  • 3-Year Warranty Included, with Extended Warranty Options Available


Core-X TeslaStation Pro

Intel X299, Intel Core-X Extreme Processors, supports 3-Way SLI, CrossFireX and Tesla cards, 8 x DIMM, Max. 128GB Quad Channel Memory, Dual Intel Gigabit LAN, 8-Channel High Definition Audio CODEC featuring Crystal Sound 2.

Features: Ultra Quiet System Design, Multi-Display Options, GPU Compatible
Drive Bays: 4 Fixed Drives
Expansion Slots: 7 x PCIe 3.0/2.0 x16
Workstation Size: Full Tower
Processor: Intel Core X i5, i7, i9
Max RAM Capacity: 256GB
Configure From: €2,100
TeslaStation XL Single Xeon Gen4

Single 4th Gen Intel Xeon Scalable Processor, GPU Computing Pedestal Supercomputer, up to 4x Tesla or RTX GPU Cards

Features: Ultra Quiet System Design, GPU Compatible, Up to 4x Graphics Card
Drive Bays: 8 Hot-Swap Drives
Expansion Slots: 4x PCIe 5.0 x16, 1x PCIe 5.0 x8
Workstation Size: Full Tower
Processor: Intel Xeon Scalable Processor
Max RAM Capacity: 512GB
Configure From: €3,646
TeslaStation XL Single EPYC Gen4

Single AMD EPYC 9004 Series (Gen 4) Processor, GPU Computing Pedestal Supercomputer, up to 4x Tesla or RTX GPU Cards

Features: Ultra Quiet System Design, GPU Compatible, Up to 3x Graphics Card
Drive Bays: 8 Hot-Swap Drives
Expansion Slots: 4x PCIe 5.0 x16, 1x PCIe 5.0 x8
Workstation Size: Full Tower
Processor: AMD EPYC
Max RAM Capacity: 512GB
Configure From: €4,489
Ultra High-Performance Rackmount or Tower 
TeslaStation Pro-XL Xeon Gen2

Dual Intel Xeon Scalable Processors, GPU Computing Pedestal Supercomputer, 4x Tesla, Xeon Phi or GTX Titan GPU Cards

Features: Ultra Quiet System Design, GPU Compatible
Drive Bays: 8 Hot-Swap Drives
Expansion Slots: 4 x PCIe 3.0/2.0 x16, 2 x PCIe 3.0/2.0 x8, 1 x PCIe 3.0/2.0 x8 (x4 Speed)
Workstation Size: Full Tower
Processor: Intel Xeon Scalable Processor
Max RAM Capacity: 2TB
Configure From: €4,910
Ultra High-Performance 
AMD Threadripper Pro

Up to 64 Cores, supports 4-Way SLI and CrossFireX, up to 256GB DDR4 4400 (OC) Memory, Dual 10GbE LAN, Intel Wi-Fi, Bluetooth, 8-Channel High Definition Audio CODEC.

Features: Ultra Quiet System Design, Multi-Display Options, GPU Compatible
Drive Bays: 4 Fixed Drives
Expansion Slots: 7 x PCIe 3.0/2.0 x16
Workstation Size: Full Tower
Processor: AMD Ryzen Threadripper Pro
Max RAM Capacity: 256GB
Configure From: €5,276
TeslaStation Pro-XL Xeon Gen4

Dual 4th Gen Intel Xeon Scalable Processors, GPU Computing Pedestal Supercomputer, up to 4x Tesla or RTX GPU Cards

Features: Ultra Quiet System Design, GPU Compatible, Up to 4x Graphics Card
Drive Bays: 8 Hot-Swap Drives
Expansion Slots: 7x PCIe 5.0 x16
Workstation Size: Full Tower
Processor: Intel Xeon Scalable Processor
Max RAM Capacity: 1TB
Configure From: €6,520
Ultra High-Performance Rackmount or Tower, 3.5" Drives, NVMe Drives, 10Gb LAN
TeslaStation Pro-XL Xeon Gen3

Dual Intel Xeon Scalable Gen3 Processors, GPU Computing Pedestal Supercomputer, 4x Tesla or RTX GPU Cards

Features: Ultra Quiet System Design, GPU Compatible
Drive Bays: 8 Hot-Swap Drives
Expansion Slots: 4 x PCIe 3.0/2.0 x16, 2 x PCIe 3.0/2.0 x8, 1 x PCIe 3.0/2.0 x8 (x4 Speed)
Workstation Size: Full Tower
Processor: Intel Xeon Scalable Processor
Max RAM Capacity: 2TB
Configure From: €6,564

Call a Broadberry Storage & Server Specialist Now: +49 89 1208 5600


NVIDIA educational and research discount

NVIDIA GPU comparison:

                      P40                  P100 PCIe   V100S       Titan RTX   T4                  A100
Architecture          Pascal               Pascal      Volta       Turing      Turing              Ampere
SMs                   30                   56          80          72          72                  108
CUDA Cores            3,840                3,584       5,120       4,608       2,560               6,912
Tensor Cores          N/A                  N/A         640         576         320                 432
Frequency             1,303 MHz            1,126 MHz   1,267 MHz   1,350 MHz   1,590 MHz           -
TFLOPS (double)       367.4 GFLOPS (1:32)  4.7         8.2         -           65                  9.7
TFLOPS (single)       12                   9.3         16.4        16.3        8.1                 19.5
TFLOPS (half/Tensor)  183.7 GFLOPS (1:64)  18.7        130         130         65.13 TFLOPS (8:1)  624
Cache                 3 MB L2              4 MB L2     6 MB        -           4 MB                40 MB
Max. Memory           24 GB                16 GB       32 GB       24 GB       16 GB               40 GB
Memory B/W            346 GB/s             720 GB/s    1,134 GB/s  672 GB/s    350 GB/s            1,555 GB/s


The NVIDIA Tesla P40 GPU accelerator is the first to combine an enterprise-grade visual computing platform for HPC rendering, simulation and design with virtual applications, desktops and workstations. This gives organisations the freedom to virtualise both complete visualisation and compute (CUDA and OpenCL) workloads.

NVIDIA Tesla P40

The NVIDIA Tesla P40 harnesses the industry-leading NVIDIA Pascal architecture to deliver up to 2x the professional graphics performance of the Tesla M60. This powerful GPU also supports eight different user profiles, so virtual GPU resources can be efficiently provisioned to meet the needs of each user.

With the NVIDIA Tesla P40 and NVIDIA virtual GPU software, organisations are now able to virtualise high-end applications with massive, complex data sets. Resources are allocated to make sure users have the correct GPU acceleration for the task. The power of these GPUs is shared across a multitude of virtual workstations, desktops and apps. You are able to provide an immersive user experience through virtual workspaces with enhanced management, security and productivity.

Outstanding User Experience

Enjoy the ideal user experience for any workload or vGPU profile. NVIDIA Quadro vDWS software with the Tesla P40 GPU supports compute for each vGPU, enabling professional design and engineering workflows at peak performance. The Tesla P40 is capable of delivering twice the graphics performance of the M60, and with the new resource scheduler users can rely on consistent performance.

Optimal Management and Monitoring

Management tools provide vGPU visibility at the host and guest level, with application-level monitoring capabilities. This allows businesses to intelligently design, manage and support their end users' experience.

Experience instant, real-time insight with end-to-end management and monitoring. Integration with VMware vRealize Operations, XenCenter and Citrix Director gives you flexibility and control.

Flexible GPU Infrastructure

A Pascal GPU supports 50% more users than a single Maxwell GPU when scaling high-performance virtual graphics and compute. User profiles are more granular, enabling more precise provisioning of vGPU resources, with bigger profile sizes to support your most demanding users. The P40 therefore helps significantly lower your TCO.


NVIDIA Tesla P100 GPU accelerators are the first ever AI supercomputing data centre GPUs. By harnessing NVIDIA Pascal GPU architecture, they provide a unified platform for accelerating HPC and AI. With high performance and fewer (but ultra-fast) nodes, the Tesla P100 allows data centres to significantly increase throughput while lowering overall costs.

NVIDIA Tesla P100

The Tesla P100 allows mixed-workload HPC data centres to experience a massive boost in throughput while also lowering spend. With fewer nodes and significantly more power per node, customers can save up to 70% in overall data centre costs.

Reimagined from silicon to software, the Tesla P100 is designed with innovation at every level. Every ground-breaking technology provides a massive boost in performance, contributing to the creation of the planet's fastest compute node.

Exponential Performance Leap with Pascal Architecture

The Tesla P100 delivers superior performance for HPC and hyperscale workloads, enabled by the NVIDIA Pascal architecture. Pascal features more than 21 teraflops of FP16 performance and is optimised to drive exciting new discoveries in deep learning applications. For HPC workloads, Pascal delivers over 5 teraflops of double-precision and over 10 teraflops of single-precision performance.

Unprecedented Efficiency with CoWoS with HBM2

The Tesla P100 integrates compute and data into the same package by adding CoWoS (Chip-on-Wafer-on-Substrate) with HBM2 technology to reach up to 3x the memory performance of the NVIDIA Maxwell architecture, delivering a once-in-a-generation jump in time-to-solution for data-intensive applications.

Applications at Massive Scale with NVIDIA NVLink

The NVIDIA NVLink high-speed bidirectional interconnect is designed to scale applications across multiple GPUs by delivering 5x higher performance than today's best-in-class interconnect technology.
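In practice, applications reach NVLink through standard CUDA peer-to-peer calls rather than any NVLink-specific API. The sketch below is illustrative only (variable names are ours, error handling is omitted, and a simple two-GPU topology is assumed): it enables peer access between two devices and copies a buffer directly between them. On NVLink-connected GPUs the transfer runs over NVLink; otherwise it falls back to PCIe.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal sketch: enable peer-to-peer access between GPU 0 and GPU 1 and
    // copy a buffer directly between them.
    int main() {
        int canAccess01 = 0, canAccess10 = 0;
        cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
        cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
        if (!canAccess01 || !canAccess10) {
            printf("Peer access between GPU 0 and GPU 1 is not available.\n");
            return 0;
        }

        const size_t bytes = 256 << 20;   // 256 MiB test buffer
        float *buf0 = nullptr, *buf1 = nullptr;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0); // let GPU 0 address GPU 1's memory
        cudaMalloc(&buf0, bytes);

        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        cudaMalloc(&buf1, bytes);

        // Direct GPU-to-GPU copy (no staging through host memory).
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }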

Simpler Programming with Page Migration Engine

With the Page Migration Engine, developers spend less time managing data movement. Applications can now scale to near-limitless amounts of memory, far beyond the GPU's physical memory size.
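The Page Migration Engine works hand in hand with CUDA Unified Memory: a single allocation is visible to both CPU and GPU, and pages migrate on demand rather than being copied explicitly. The sketch below is a minimal illustration of that idea (sizes and names are ours, error handling omitted), not vendor reference code.

    #include <cuda_runtime.h>

    // Scale each element of a vector in place on the GPU.
    __global__ void scale(float *x, float a, size_t n) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const size_t n = 1 << 26;          // ~67M floats (~256 MB)
        float *x = nullptr;

        // One allocation visible to both CPU and GPU; pages migrate on demand,
        // so no explicit cudaMemcpy calls are needed.
        cudaMallocManaged(&x, n * sizeof(float));

        for (size_t i = 0; i < n; ++i) x[i] = 1.0f;   // touched on the CPU first

        scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);  // pages migrate to the GPU
        cudaDeviceSynchronize();

        float first = x[0];                           // and back to the CPU
        (void)first;

        cudaFree(x);
        return 0;
    }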

The fastest and highest-performance PC graphics card created, the NVIDIA Titan RTX is powered by the Turing architecture and brings 130 Tensor TFLOPS of performance, 576 Tensor Cores and 24GB of super-fast GDDR6 memory to your PC. The Titan RTX powers machine learning, AI and creative workflows.

NVIDIA Titan RTX

It is difficult to find a better option for dealing with computationally intense workloads than the Titan RTX. Created to dominate in even the most demanding of situations, it brings ultimate speed to your data centre. The Titan RTX is built on NVIDIA's Turing GPU Architecture. It includes the very latest Tensor Core and RT Core technology and is also supported by NVIDIA drivers and SDKs. This enables you to work faster and delivers improved results.

AI models can be trained significantly faster with 576 NVIDIA Turing mixed-precision Tensor Cores providing 130 TFLOPS of AI performance. This card works well with all the best-known deep learning frameworks, is compatible with NVIDIA GPU Cloud and is supported by NVIDIA's CUDA-X AI SDK.

It also accelerates end-to-end data science workflows, working significantly faster thanks to 4,608 NVIDIA Turing CUDA cores. With 24 GB of GDDR6 memory you can process gargantuan sets of data.

The Titan RTX reaches a level of performance far beyond its predecessors. Built with multi-precision Turing Tensor Cores, the Titan RTX provides breakthrough performance across FP32, FP16, INT8 and INT4 precisions, making quicker training and inferencing of neural networks possible.
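For readers curious what driving the Tensor Cores looks like at the CUDA level, the sketch below uses the standard WMMA API to multiply a single 16x16 FP16 tile while accumulating in FP32. It is a minimal, illustrative kernel only (the function and pointer names are ours); production code normally relies on cuBLAS, cuDNN or a deep learning framework to do this at full matrix scale.

    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // Minimal sketch: one warp multiplies a single 16x16 tile of FP16 matrices
    // on the Tensor Cores, accumulating in FP32. Real kernels tile this pattern
    // across the whole matrix.
    __global__ void tile_mma(const half *A, const half *B, float *C) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);
        wmma::load_matrix_sync(a, A, 16);        // leading dimension = 16
        wmma::load_matrix_sync(b, B, 16);
        wmma::mma_sync(acc, a, b, acc);          // acc = A*B + acc on Tensor Cores
        wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
    }

The kernel is launched with a single warp, for example tile_mma<<<1, 32>>>(dA, dB, dC), where dA, dB and dC are device buffers of the appropriate size.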


NVIDIA Tesla T4 GPUs power the planet's most reliable mainstream workstations and fit easily into standard data centre infrastructures. Designed into a low-profile, 70-watt package, the T4 is powered by NVIDIA Turing Tensor Cores, supplying innovative multi-precision performance to accelerate a vast range of modern applications.

NVIDIA Tesla T4

It is almost certain that we are heading towards a future in which each of your customer interactions and every one of your products and services will be influenced and enhanced by artificial intelligence. AI is going to become the driving force behind future business, and whoever adapts first to this change will hold the key to long-term success. We realise that the future will require a computing platform able to accelerate the full diversity of modern AI, allowing businesses to reimagine how they meet customer demands and to cost-effectively scale AI-based services.

The NVIDIA T4 GPU accelerates diverse cloud workloads. These include high-performance computing, data analytics, deep learning training and inference, graphics and machine learning. T4 features multi-precision Turing Tensor Cores and new RT Cores. It is based on NVIDIA Turing architecture and comes in a very energy efficient small PCIe form factor. T4 delivers ground-breaking performance at scale.

T4 harnesses revolutionary Turing Tensor Core technology featuring multi-precision computing to deal with diverse workloads. Capable of truly blazing fast speeds, T4 delivers up to 40x higher performance than CPUs.

User engagement will be a vital component of successful AI implementation, with responsiveness being one of the main keys. This will be especially apparent in services such as visual search, conversational AI and recommender systems. As models continue to advance and increase in complexity, ever-growing compute capability will be required. T4 provides up to 40x better throughput, allowing more requests to be served in real time.

The medium of online video is quite possibly the number one way of delivering information in the modern age. As we move forward into the future, the volume of online videos will only continue to grow exponentially. Simultaneously, the demand for answers to how to efficiently search and gain insights from video continues to grow.

T4 provides ground-breaking performance for AI video applications, featuring dedicated hardware transcoding engines that deliver 2x the decoding performance of previous-generation GPUs. T4 can decode nearly 40 full-HD video streams, making it simple to integrate scalable deep learning into video pipelines to provide inventive, smart video services.


With 32 GB of HBM2 memory and powered by the NVIDIA Volta GPU architecture, the NVIDIA Tesla V100S delivers the performance of up to 100 CPUs in a single GPU, allowing data engineers, researchers and scientists to take on challenges once believed to be impossible.

NVIDIA Tesla V100S

The NVIDIA Tesla V100S is the most advanced data centre GPU ever created to accelerate AI, graphics and HPC. The Tesla V100S is the crown jewel of the Tesla data centre computing platform for deep learning, graphics and HPC. Over 450 HPC applications and every major deep learning framework can be accelerated by the Tesla platform, and the V100S provides huge performance gains and cost-saving opportunities.

The previous Tesla V100 had been hailed as the most advanced data centre graphics card, and this new GPU takes things up a notch. Designed for AI acceleration, high-performance computing, graphics and data science, the NVIDIA Tesla V100S is a real game changer.

The Tesla V100S is an upgrade over the Tesla V100. While both seem similar on the outside, featuring a dual-slot design and a cooler, the performance of the V100S goes above and beyond what was possible with the V100.

The main difference between the two is in the memory configurations available: the NVIDIA Tesla V100S comes only in a 32 GB HBM2 version, and it boasts higher boost clock speeds (1,601 MHz) and memory bandwidth (1,134 GB/s).

With this enhanced clock speed, the V100S delivers up to 17.1% higher single- and double-precision performance than the original V100, at 16.4 TFLOPS and 8.2 TFLOPS respectively. Tensor performance has also been enhanced by 16.1%, now reaching 130 TFLOPS.
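As a rough sanity check on those percentages (assuming the commonly quoted V100 PCIe baseline of about 14 TFLOPS single precision, 7 TFLOPS double precision and 112 TFLOPS Tensor performance, figures not stated above): 16.4 / 14 ≈ 1.17 and 8.2 / 7 ≈ 1.17, matching the 17.1% uplift, while 130 / 112 ≈ 1.16 matches the quoted 16.1% Tensor improvement.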


The NVIDIA A100 GPU provides unmatched acceleration at every scale for data analytics, AI and high-performance computing to attack the very toughest computing challenges. An A100 can efficiently and effectively scale to thousands of GPUs. With NVIDIA Multi-Instance GPU (MIG) technology, it can be partitioned into 7 GPU instances, accelerating workloads of every size.

NVIDIA A100

The NVIDIA A100 introduces double-precision Tensor Cores, the biggest milestone since double-precision computing was introduced in GPUs. The speed boost this offers can be immense: a 10-hour double-precision simulation running on NVIDIA V100 Tensor Core GPUs is cut down to only 4 hours when run on A100s. High-performance applications can also leverage TF32 precision in the A100's Tensor Cores to reach up to 10x higher throughput for single-precision dense matrix multiply operations.
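The appeal of TF32 is that existing FP32 code can opt in without changing data types. The sketch below is illustrative only (the function name and device pointers are ours; allocation and error handling are omitted): it runs an ordinary single-precision GEMM while letting cuBLAS use TF32 Tensor Core math on Ampere GPUs.

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    // Minimal sketch: FP32 inputs/outputs, TF32 Tensor Core math internally.
    void sgemm_tf32(const float *dA, const float *dB, float *dC, int n) {
        cublasHandle_t handle;
        cublasCreate(&handle);

        // Opt this handle into TF32 Tensor Core execution for FP32 routines.
        cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

        const float alpha = 1.0f, beta = 0.0f;
        // C = A * B, all n x n, column-major (cuBLAS convention).
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n,
                    &alpha, dA, n, dB, n,
                    &beta,  dC, n);

        cublasDestroy(handle);
    }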

In today's world, being able to get the most out of your data is crucial. It is vital to be able to visualise, analyse and transform huge datasets into insights. However, scale-out solutions often become bogged down as datasets end up spread across many systems. Solutions powered by the A100 deliver the necessary compute power, along with 1.6 TB/s of memory bandwidth and huge scalability.

The NVIDIA A100 with MIG maximises GPU-accelerated infrastructure utilisation in a way never seen before. With MIG, an A100 GPU can be partitioned into up to 7 independent instances. This can give a multitude of users access to GPU acceleration for their applications and projects.


Broadberry GPU Workstations harness the processing power of NVIDIA Tesla graphics processing units for a huge range of applications, such as image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing and much more.

As computing evolves and processing moves from the CPU alone to co-processing between the CPU and GPU, NVIDIA invented the CUDA parallel computing architecture to harness the performance benefits.
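A minimal example of that co-processing model: the CPU stages the data and launches the work, while the GPU executes the data-parallel kernel across thousands of threads. This is an illustrative CUDA sketch only (names are ours, error handling omitted), not Broadberry or NVIDIA reference code.

    #include <vector>
    #include <cuda_runtime.h>

    // Each GPU thread handles one element; the CPU orchestrates the work.
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        float *da, *db, *dc;
        cudaMalloc(&da, n * sizeof(float));
        cudaMalloc(&db, n * sizeof(float));
        cudaMalloc(&dc, n * sizeof(float));

        cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // GPU runs the data-parallel part

        cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }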

Speak to Broadberry GPU computing experts to find out more.


Accelerating scientific discovery, visualising big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or the training of sophisticated deep learning networks. These workloads also require accelerated data centres to meet the growing demand for exponential computing power.

NVIDIA Tesla is the world's leading platform for accelerated data centres, deployed by some of the world's largest supercomputing centres and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools and applications to enable faster scientific discoveries and big data insights.

At the heart of the NVIDIA Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute-intensive workloads, without increasing the power budget and physical footprint of data centres.


Broadberry Celebrating Over 30 Years.


Our Rigorous Testing

Before leaving our UK workshop, all Broadberry server and storage solutions undergo a rigorous 48-hour testing procedure. This, along with high-quality, industry-leading components, ensures that all of our server and storage solutions meet the strictest quality guidelines demanded of us.


Unequalled Flexibility

Our main objective is to offer great-value, high-quality server and storage solutions. We understand that every company has different requirements, and as such we are able to offer unequalled flexibility in designing custom server and storage solutions to meet our clients' needs.

Trusted by the World's Biggest Brands

We have established ourselves as one of the biggest storage providers in the UK and, since 1989, have supplied our server and storage solutions to the world's biggest brands. Our customers include:

NASA, BBC, ITV, Sony, Sky, Disney and Google.