NVIDIA Tegra X1 vs Google Tensor G2

CPU comparison with benchmarks



CPU comparison

NVIDIA Tegra X1 or Google Tensor G2 - which processor is faster? In this comparison we look at the differences between the two chips, compare their technical data and benchmark results, and analyze which CPU is better.

The NVIDIA Tegra X1 has 8 cores with 8 threads and runs at a maximum clock frequency of 2.00 GHz. Up to 8 GB of memory is supported across 2 memory channels. The NVIDIA Tegra X1 was released in Q1/2017.

The Google Tensor G2 has 8 cores with 8 threads and runs at a maximum clock frequency of 2.85 GHz. The CPU supports up to 12 GB of memory across 2 memory channels. The Google Tensor G2 was released in Q4/2022.
NVIDIA Tegra X1 | Characteristic | Google Tensor G2
NVIDIA Tegra (2) | Family | Google Tensor (3)
NVIDIA Tegra X1 (2) | CPU group | Google Tensor G2 (1)
2 | Generation | 2
Cortex-A57/-A53 | Architecture | Cortex-X1/-A78/-A55
Mobile | Segment | Mobile
-- | Predecessor | Google Tensor
-- | Successor | --

CPU Cores and Base Frequency

The NVIDIA Tegra X1 has 8 CPU cores and can run 8 threads in parallel, with a maximum clock frequency of 2.00 GHz. The Google Tensor G2 also has 8 CPU cores and can run 8 threads simultaneously, with a maximum clock frequency of 2.85 GHz.

NVIDIA Tegra X1 | Characteristic | Google Tensor G2
8 | Cores | 8
8 | Threads | 8
hybrid (big.LITTLE) | Core architecture | hybrid (Prime / big.LITTLE)
No | Hyperthreading | No
No | Overclocking | No
4x Cortex-A57 @ 2.00 GHz | A-Core | 2x Cortex-X1 @ 2.85 GHz
4x Cortex-A53 @ 2.00 GHz | B-Core | 2x Cortex-A78 @ 2.35 GHz
-- | C-Core | 4x Cortex-A55 @ 1.80 GHz

Artificial Intelligence and Machine Learning

Processors with support for artificial intelligence (AI) and machine learning (ML) can handle many workloads, especially audio, image and video processing, much faster than classic processors. ML algorithms improve their results the more data they are trained on. On a dedicated accelerator, ML tasks can run orders of magnitude faster than on classic CPU cores.

NVIDIA Tegra X1 | Characteristic | Google Tensor G2
-- | AI hardware | Google Tensor AI
-- | AI specifications | Google Edge TPU @ 4 TOPS

Internal Graphics

Both the NVIDIA Tegra X1 and the Google Tensor G2 have integrated graphics, called iGPU for short. The iGPU uses the system's main memory as graphics memory and sits on the processor's die.

NVIDIA Tegra X1 | Characteristic | Google Tensor G2
NVIDIA Tegra X1 (Maxwell) | GPU | ARM Mali-G710 MP7
0.30 GHz | GPU frequency | 0.90 GHz
1.00 GHz | GPU (Turbo) | --
1 | GPU Generation | Valhall 3
20 nm | Technology | 4 nm
1 | Max. displays | 1
2 | Compute units | 7
256 | Shaders | --
No | Hardware Raytracing | No
No | Frame Generation | No
2 GB | Max. GPU Memory | --
12 | DirectX Version | 12

Hardware codec support

A photo or video codec that is accelerated in hardware can greatly speed up processing and extend the battery life of notebooks or smartphones during video playback.

NVIDIA Tegra X1 (Maxwell) | Codec | ARM Mali-G710 MP7
Decode | H.265 / HEVC (8-bit) | Decode / Encode
Decode | H.265 / HEVC (10-bit) | Decode / Encode
Decode / Encode | H.264 | Decode / Encode
Decode | VP9 | Decode / Encode
Decode | VP8 | Decode / Encode
No | AV1 | Decode
Decode | AVC | Decode / Encode
Decode | VC-1 | Decode / Encode
Decode / Encode | JPEG | Decode / Encode

Memory & PCIe

The NVIDIA Tegra X1 can use up to 8 GB of memory in 2 memory channels. The maximum memory bandwidth is 25.6 GB/s. The Google Tensor G2 supports up to 12 GB of memory in 2 memory channels and achieves a memory bandwidth of up to 53.0 GB/s.
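The Tegra X1's 25.6 GB/s figure can be reproduced from the memory spec as transfer rate x bus width x channel count. The sketch below assumes two 32-bit LPDDR channels (64-bit total bus), which is an assumption, not a figure stated on this page. Note that the same formula applied to LPDDR5-5500 yields 44.0 GB/s rather than the listed 53.0 GB/s, so the Tensor G2 value likely assumes a higher effective transfer rate.

```python
# Theoretical DRAM bandwidth = transfer rate (MT/s) x bytes per transfer x channels.
def bandwidth_gbs(mt_per_s: int, channel_bits: int, channels: int) -> float:
    """Peak theoretical bandwidth in GB/s (1 GB = 1000 MB here)."""
    return mt_per_s * (channel_bits / 8) * channels / 1000

# Tegra X1: LPDDR4-3200 across 2 x 32-bit channels (assumed bus layout).
print(bandwidth_gbs(3200, 32, 2))  # 25.6 GB/s, matching the value above

# Same assumption for LPDDR5-5500 gives 44.0 GB/s, not the listed 53.0 GB/s.
print(bandwidth_gbs(5500, 32, 2))  # 44.0 GB/s
```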

NVIDIA Tegra X1 | Characteristic | Google Tensor G2
LPDDR4-3200 | Memory | LPDDR5-5500
8 GB | Max. Memory | 12 GB
2 (Dual Channel) | Memory channels | 2 (Dual Channel)
25.6 GB/s | Max. Bandwidth | 53.0 GB/s
No | ECC | No
2.50 MB | L2 Cache | 8.00 MB
-- | L3 Cache | 4.00 MB
-- | PCIe version | --
-- | PCIe lanes | --
-- | PCIe Bandwidth | --

Thermal Management

The thermal design power (TDP for short) of the NVIDIA Tegra X1 is 7 W, while the Google Tensor G2 has a TDP of 10 W. The TDP indicates the cooling solution required to keep the processor within its thermal limits.

NVIDIA Tegra X1 | Characteristic | Google Tensor G2
7 W | TDP (PL1 / PBP) | 10 W
-- | TDP (PL2) | --
-- | TDP up | --
5 W | TDP down | --
-- | Tjunction max. | --

Technical details

The NVIDIA Tegra X1 is manufactured in 20 nm and has 2.50 MB of cache. The Google Tensor G2 is manufactured in 4 nm and has 12.00 MB of cache.

NVIDIA Tegra X1 | Characteristic | Google Tensor G2
20 nm | Technology | 4 nm
Monolithic | Chip design | Monolithic
Armv8-A (64 bit) | Instruction set (ISA) | Armv8-A (64 bit)
-- | ISA extensions | --
-- | Socket | --
None | Virtualization | None
No | AES-NI | No
-- | Operating systems | Android
Q1/2017 | Release date | Q4/2022
-- | Release price | --




Average performance in benchmarks

⌀ Single-core performance in 1 CPU benchmark
NVIDIA Tegra X1 (22%)
Google Tensor G2 (100%)
⌀ Multi-core performance in 1 CPU benchmark
NVIDIA Tegra X1 (24%)
Google Tensor G2 (100%)

Geekbench 6 (Single-Core)

Geekbench 6 is a benchmark for modern computers, notebooks and smartphones. New in this version is optimized support for modern CPU architectures, e.g. designs based on the big.LITTLE concept that combine CPU cores of different sizes. The single-core benchmark only evaluates the performance of the fastest CPU core; the number of CPU cores is irrelevant here.
NVIDIA Tegra X1
8C 8T @ 2.00 GHz
308 (22%)
Google Tensor G2
8C 8T @ 2.85 GHz
1426 (100%)

Geekbench 6 (Multi-Core)

Geekbench 6 is a benchmark for modern computers, notebooks and smartphones. New in this version is optimized support for modern CPU architectures, e.g. designs based on the big.LITTLE concept that combine CPU cores of different sizes. The multi-core benchmark evaluates the performance of all of the processor's CPU cores. Virtual threading features such as AMD's SMT or Intel's Hyper-Threading have a positive impact on the result.
NVIDIA Tegra X1
8C 8T @ 2.00 GHz
798 (24%)
Google Tensor G2
8C 8T @ 2.85 GHz
3342 (100%)
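One way to read these scores is the multi-to-single-core scaling factor. The sketch below simply divides the Geekbench 6 numbers quoted above; neither chip comes close to the 8x its core count might suggest, since the slower efficiency cores contribute far less than the big cores.

```python
# Geekbench 6 scores quoted above.
scores = {
    "NVIDIA Tegra X1":  {"single": 308,  "multi": 798},
    "Google Tensor G2": {"single": 1426, "multi": 3342},
}

for name, s in scores.items():
    ratio = s["multi"] / s["single"]
    print(f"{name}: multi-core scaling = {ratio:.2f}x")
# NVIDIA Tegra X1: multi-core scaling = 2.59x
# Google Tensor G2: multi-core scaling = 2.34x
```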

iGPU - FP32 Performance (Single-precision GFLOPS)

The theoretical computing performance of the internal graphics unit of the processor with simple accuracy (32 bit) in GFLOPS. GFLOPS indicates how many billion floating point operations the iGPU can perform per second.
NVIDIA Tegra X1
NVIDIA Tegra X1 (Maxwell) @ 1.00 GHz
512 (73%)
Google Tensor G2
ARM Mali-G710 MP7 @ 0.90 GHz
700 (100%)
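The Tegra X1's 512 GFLOPS can be derived directly from the shader count and turbo clock in the GPU table above, assuming 2 FLOPs per shader per cycle (one fused multiply-add), which is the usual convention for this metric. The shader count of the Mali-G710 MP7 is not listed on this page, so its 700 GFLOPS figure cannot be checked the same way.

```python
# FP32 throughput: shaders x FLOPs per shader per cycle x clock (GHz) = GFLOPS.
def fp32_gflops(shaders: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    # flops_per_cycle = 2 assumes one fused multiply-add (FMA) per cycle.
    return shaders * flops_per_cycle * clock_ghz

# Tegra X1 (Maxwell): 256 shaders at the 1.00 GHz turbo clock.
print(fp32_gflops(256, 1.00))  # 512.0 GFLOPS, matching the score above
```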

Geekbench 5, 64bit (Single-Core)

Geekbench 5 is a cross-platform benchmark that makes heavy use of the system's memory; fast memory significantly improves the result. The single-core test only uses one CPU core; the number of cores or hyperthreading capability does not matter.
NVIDIA Tegra X1
8C 8T @ 2.00 GHz
0 (0%)
Google Tensor G2
8C 8T @ 2.85 GHz
1068 (100%)

Geekbench 5, 64bit (Multi-Core)

Geekbench 5 is a cross-platform benchmark that makes heavy use of the system's memory; fast memory significantly improves the result. The multi-core test involves all CPU cores and benefits greatly from hyperthreading.
NVIDIA Tegra X1
8C 8T @ 2.00 GHz
0 (0%)
Google Tensor G2
8C 8T @ 2.85 GHz
3149 (100%)

AnTuTu 9 Benchmark

The AnTuTu 9 benchmark is very well suited to measuring the performance of a smartphone. AnTuTu 9 is quite heavy on 3D graphics and can now also use the "Metal" graphics interface. In AnTuTu, memory and UX (user experience) are also tested by simulating browser and app usage. AnTuTu version 9 can compare any ARM CPU running on Android or iOS. Devices may not be directly comparable when benchmarked on different operating systems.

In the AnTuTu 9 benchmark, the single-core performance of a processor is only slightly weighted. The rating is made up of the multi-core performance of the processor, the speed of the working memory, and the performance of the internal graphics.
NVIDIA Tegra X1
8C 8T @ 2.00 GHz
0 (0%)
Google Tensor G2
8C 8T @ 2.85 GHz
789419 (100%)

Performance for Artificial Intelligence (AI) and Machine Learning (ML)

Processors with support for artificial intelligence (AI) and machine learning (ML) can handle many workloads, especially audio, image and video processing, much faster than classic processors. Performance is measured in trillions of arithmetic operations per second (TOPS).
NVIDIA Tegra X1
8C 8T @ 2.00 GHz
0 (0%)
Google Tensor G2
8C 8T @ 2.85 GHz
4 (100%)
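A TOPS figure translates into a theoretical lower bound on inference time: operations per inference divided by operations per second. The sketch below uses a hypothetical model needing 8 GOPs per inference, a made-up number for illustration, not a figure from this page; real workloads never reach the peak rate.

```python
# With work in GOPs (1e9 ops) and throughput in TOPS (1e12 ops/s), the
# powers of ten cancel so that GOPs / TOPS comes out directly in milliseconds.
def min_inference_ms(giga_ops: float, tops: float) -> float:
    return giga_ops / tops

# Hypothetical model needing 8 GOPs per inference on a 4 TOPS accelerator.
print(min_inference_ms(8, 4))  # 2.0 ms theoretical minimum per inference
```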

Devices using these processors

NVIDIA Tegra X1: NVIDIA Shield (1st Gen), Nintendo Switch
Google Tensor G2: Google Pixel 7, Google Pixel 7 Pro

Popular comparisons containing these CPUs

1. Qualcomm Snapdragon 8 Gen 1 vs Google Tensor G2
2. Google Tensor G3 vs Google Tensor G2
3. Qualcomm Snapdragon 888 vs Google Tensor G2
4. Google Tensor vs Google Tensor G2
5. Qualcomm Snapdragon 7+ Gen 2 vs Google Tensor G2
6. Google Tensor G2 vs Qualcomm Snapdragon 695 5G
7. Google Tensor G2 vs Qualcomm Snapdragon 865
8. Google Tensor G2 vs Apple A15 Bionic (5-GPU)
9. Google Tensor G2 vs Qualcomm Snapdragon 765G
10. Qualcomm Snapdragon 8+ Gen 1 vs Google Tensor G2

