Wargaming

World of Tanks

a story of explosive growth
Deputy CTO | 2010 — 2014
Balancing Growth and Player Satisfaction
Following its 2010 launch, Wargaming's World of Tanks rapidly gained immense global popularity.

The only European game cluster quickly became overwhelmed, with players enduring lengthy wait times of 30 minutes or more just to access the game. Furthermore, persistent problems with international internet connections and inefficient routing significantly hindered gameplay for the growing number of users in Russia and the CIS region.
The situation was dire for business. Players facing connectivity issues and lag were less likely to spend money on in-game currency, directly impacting revenue. This dissatisfaction led to players abandoning the game for other options. To address this critical problem, Wargaming brought me on board to find a solution.
What I Developed and Implemented
A tailored suite of techniques for server and network infrastructure organization, reducing setup times from 60 days to just 5 without compromising quality.
A purpose-built global network infrastructure that met Wargaming's need for rapid server deployment and high-performance player connectivity.
These innovations contributed to the success of World of Tanks, which has garnered numerous prestigious awards and set several Guinness World Records, including the highest number of simultaneous online players on a single MMO server and on one server cluster. It also won awards for being the best MMO game.
Initial Challenges
Challenge 1
Existing backbone networks failed to deliver the speed and quality required for optimal WoT gameplay. These networks prioritized basic connectivity and minimal packet loss, neglecting the crucial task of optimizing data routes between servers and end-users.
Challenge 2
The existing architecture capped the number of concurrent players a single game cluster could handle at 60,000 to 80,000. With the sole cluster rapidly approaching that ceiling, an additional point of presence was needed to cover both backup requirements and the expanding player base.
Challenge 3
The Oversun representative responsible for liaising with Wargaming turned out to be a scammer, resorting to ultimatums and threats of server shutdowns.
Leveraging my expertise, I was able to successfully navigate these challenges and establish a solid foundation for future growth.
Key Factors in Overcoming the Challenges
Connectivity
Extensive market knowledge and direct relationships with telecom operators
Expertise
Experience in building and operating Corbina Telecom's network, one of Russia's largest backbone and broadband providers
Innovation
Expertise in launching and managing business processes at Oversun, a pioneering commercial data center in Russia
Prevention
Understanding of strategies to mitigate the detrimental actions of a key employee

Europe

Expanding Beyond Limits
WoT's European servers

When I joined Wargaming, they were already operating a server cluster based in Munich. However, this cluster had a limited capacity, accommodating only 60,000 to 80,000 concurrent players. Unfortunately, the data center housing the cluster lacked the spare capacity needed to expand and add more servers.


World of Tanks was experiencing a surge in popularity across Europe, and the Munich server cluster was struggling to keep up. This resulted in a noticeable decline in the quality of the gameplay experience.

Players Encountered Frustrating Issues
World of Tanks server cluster based in Munich
  • Evening queues
    Waiting for an available server slot became a common occurrence during peak hours.
  • Unstable connections
    Frequent disconnects and lag disrupted gameplay.
  • Slow update speeds
    Downloading game updates became a tedious and time-consuming process.
These issues, among others, transformed the gameplay experience from one of enjoyment to pure frustration. Players began to abandon the game, venting their grievances on forums and social media. Some were even discouraged from trying it altogether. This exodus of players posed a significant threat to the future growth and development of the World of Tanks project.
To address these challenges, we decided to establish a new server cluster in Amsterdam, utilizing the Equinix AM3 data center. Despite having a European contractor on board and pre-established terms, the finalization of this new point of presence for our server and network infrastructure encountered unexpected delays. By the time I got involved, the project was already more than two months behind schedule.
World of Tanks server cluster in Amsterdam
Meanwhile, players continued to suffer through frustrating gameplay experiences on the overburdened Munich servers. This not only impacted player satisfaction but also resulted in financial losses for Wargaming.

The root of the problem lay in the existing technical and organizational infrastructure, which simply wasn't equipped to handle the explosive growth of the product.
Case Study
Establishing a standard point of presence for the World of Tanks server and network infrastructure is a complex undertaking. It involves setting up a 16-rack cage enclosure, complete with cable organizers and PDUs.

Additionally, approximately 500 servers and 60 network devices need to be installed and interconnected with over 2,500 UTP patch cables and 1,100 power cables. This is just a glimpse into the numerous tasks required for a fully operational setup.
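As a rough sketch, the figures quoted above can be captured in a small script to get a per-rack feel for the scale of the work; the dictionary keys and the per-rack breakdown are my own illustration, using only the numbers from the text:

```python
# Rough bill of materials for one 16-rack point of presence,
# using the figures quoted in the text. The key names and the
# per-rack breakdown are illustrative assumptions.

POP_BOM = {
    "racks": 16,
    "servers": 500,
    "network_devices": 60,
    "utp_patch_cables": 2500,
    "power_cables": 1100,
}

def per_rack(bom: dict) -> dict:
    """Average amount of each item per rack."""
    racks = bom["racks"]
    return {item: count / racks for item, count in bom.items() if item != "racks"}

print(per_rack(POP_BOM))
# On average: 31.25 servers, 3.75 network devices,
# 156.25 patch cables and 68.75 power cables per rack.
```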

Implementing a 16-rack cage of this complexity took World of Tanks over two months at the time — an unacceptably long timeframe given our urgent need to expand server capacity. It's important to note that this timeframe was specific to World of Tanks and doesn't reflect industry standards for such implementations.

I traveled to Amsterdam to personally assess the ongoing installation process.

Upon arrival, I was disappointed to find the field engineer socializing with data center staff instead of actively working on the project. Furthermore, it became apparent that the methods used for equipment installation and switching were far from optimal.

Determined to expedite the process, I rolled up my sleeves and joined the field engineer, working side-by-side for 12 to 14 hours each day to install and switch equipment. My commitment seemed to inspire the engineer, who also began working with greater focus and efficiency.
Within a week, we successfully handed over the equipment to the network engineers and server administrators, who began the final stages of configuration and preparation for launch. Just one week later, the Amsterdam cluster roared to life, welcoming eager players. This doubled Wargaming's server capacity and significantly improved the overall gameplay experience. As a result, players spent more time engaged in the game, contributing to increased revenue for the company.

Furthermore, immersing myself in the intricacies of the equipment installation and switching process allowed me to identify several opportunities for optimization.
Proposed Optimization Strategies
  1. Pre-rack preparation
     Completing all rack preparation, such as installing cable organizers and pre-labeled patch cords, before equipment installation.
  2. Single-step equipment installation
     Unpacking and installing all equipment in one go, eliminating post-installation standups and streamlining the switching process for each rack.
  3. Pre-terminated solutions
     Using pre-terminated cables to connect patch panels, further expediting the process.
  4. Parallel infrastructure implementation
     The previous sequential approach to network and server deployment left server administrators waiting; the new concurrent process made the network available as soon as servers were installed.
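The gain from the parallel-implementation idea above can be sketched as a toy critical-path model; all durations are invented purely for illustration, not real project figures:

```python
# Toy critical-path model of the sequential vs. concurrent rollout.
# All durations (in days) are invented purely for illustration.

RACK_PREP = 2   # cable organizers, pre-labeled patch cords
NETWORK = 3     # network build-out
SERVERS = 4     # server installation

def sequential_days() -> int:
    """Old approach: servers wait until the network is fully up."""
    return RACK_PREP + NETWORK + SERVERS

def concurrent_days() -> int:
    """New approach: network and servers proceed in parallel after prep,
    so the critical path is the longer of the two."""
    return RACK_PREP + max(NETWORK, SERVERS)

print(sequential_days(), "vs", concurrent_days())  # 9 vs 6 days
```

Even with these made-up numbers, the point holds: parallelizing moves the total from the sum of the stages to the longest single stage after prep.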

USA

US Expansion
building a game cluster for E3
The United States presented another exciting growth opportunity for World of Tanks. However, players in the US were experiencing subpar connectivity to the European servers. To address this and capitalize on the expanding American player base, we decided to launch a dedicated game cluster in the heart of Silicon Valley. We strategically planned the cluster's launch to coincide with the game's debut at the renowned E3 exhibition in Los Angeles, maximizing exposure and generating excitement among American players.

With the E3 exhibition looming less than a month away, I took charge of the project, knowing that time was of the essence. It was clear that achieving our ambitious deadline would require implementing the optimization strategies I had developed after the Amsterdam project, despite not having had the opportunity to test them in a real-world scenario.

Touching down at San Francisco airport with just three weeks until E3, I immediately dove into the logistical whirlwind of equipment delivery and placement. The next week was a frantic scramble, maneuvering and distributing nearly 500 pieces of equipment throughout the sprawling data center. With the clock ticking and two weeks remaining, the pressure was on.
From 2 Months to 5 Days: E3 Launch Efficiency

Driven by the tight deadline, I worked tirelessly for 12 to 14 hours each day, actively guiding the team and implementing the optimized processes. This involved everything from hands-on adjustments to sourcing necessary materials from local stores. Through this relentless effort, we successfully handed over the equipment to the network engineers and administrators within just five days, putting us on track for our E3 launch.


Ultimately, the implemented process optimizations proved to be a resounding success, slashing the launch time by an astounding 90% — from two months down to a mere five days. This remarkable achievement paved the way for a successful and timely launch at the E3 exhibition.
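As a quick sanity check on that figure: compressing a roughly 60-day rollout to five days works out to about a 92% reduction, consistent with the "90%" cited:

```python
# Sanity check on the quoted speed-up: ~60 days down to 5 days.
old_days, new_days = 60, 5
reduction = 1 - new_days / old_days
print(f"Launch time cut by {reduction:.0%}")  # Launch time cut by 92%
```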

The game's popularity continued its upward trajectory, with online player numbers soaring. Fueled by this success and armed with our refined deployment process, we opened another site just two months later in Washington, D.C., further expanding our reach and solidifying World of Tanks' presence in the United States.
Meeting Global Growth
the need for scalable deployment solutions
As demand for new game clusters surged globally, it became clear that we needed to scale our operations significantly. To achieve this, I envisioned building several autonomous teams of field engineers, each capable of independently launching new sites without my direct involvement.
Experienced engineers from the data center industry
Leveraging my extensive network within the data center industry, I swiftly assembled a team of highly skilled and experienced engineers. After a period of intensive training, we successfully launched five new sites across various locations within the next six months. This rapid expansion not only addressed the growing demand but also allowed us to proactively tackle another emerging challenge.

The growing number of points of presence, network devices, service provider connections, and change requests led to an overwhelming influx of trouble tickets. Unfortunately, the only thing declining amidst this growth was the quality of our network operations and Wargaming's satisfaction with the new backbone network.

The root of the problem lay in our limited network support resources. With only a single network architect managing the entire infrastructure, providing timely responses to network issues became virtually impossible. This bottleneck hindered our ability to maintain optimal network performance and address the growing number of challenges effectively.

Building a Network Team

Recognizing the urgent need for a dedicated network team, I was given full autonomy to build one from the ground up. However, the task was complicated by the team's designated location: Minsk, a city that, at the time, lacked appeal for many tech professionals.

To lead the network department, I reached out to my trusted colleague and renowned network expert, Oleg Yudin. We had previously collaborated on building the backbone network for Corbina Telecom, and I knew his expertise would be invaluable in tackling the challenges ahead.

Together we assembled a team of talented network engineers who stabilized the network within a couple of months. Over the next three months, we further refined our operations, elevating the backbone network to the standards of a top-tier telecommunications provider.

Impact of Efforts
This resulted in a dramatic improvement in satisfaction with the infrastructure, both within Wargaming and among our external partners.

Although our journeys with Wargaming eventually diverged, the established teams, implemented methodologies, and optimized processes continued to serve the company for years to come. They played a crucial role in ensuring the delivery of high-quality infrastructure for World of Tanks and other Wargaming projects, leaving a lasting legacy of efficiency and excellence.

For a rapidly growing product, unwavering technical and technological support is paramount. However, this doesn't always necessitate reinventing the wheel. Often, the key lies in swiftly assembling the optimal configuration using existing organizational, technical, and process-based elements.

Book a 30-minute e-meeting

E-meetings are a quick and easy way to get to know each other, see if we click, and set the stage for potential collaborations down the line.
