Month: October 2024

What is SSL and how to choose one for your website
If you look at the address bar of your browser, you will see http or https before the name of any website. This is the protocol used to connect the user’s browser to the website. HTTP stands for Hypertext Transfer Protocol. It is great for transferring messages, but unfortunately it does not provide enough protection for your Internet connection.
The HTTPS protocol is used to protect data. The letter “S” stands for “secure” and indicates that the connection is encrypted. HTTPS requires the website to have a special SSL certificate: a unique digital signature that confirms authenticity and security. Packets sent over HTTPS are transmitted in encrypted form, so even if a hacker intercepts them, they cannot be decrypted and used.
You can check whether a website has an SSL certificate by looking for the lock icon next to its address. If it is missing, the browser will display a warning that the connection is not secure: the packets you send may be stolen. You should never create a personal profile on such a resource, let alone enter bank card credentials.
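For the technically curious, the certificate a site presents can also be inspected programmatically. Below is a minimal sketch in Python using only the standard library; example.com is a placeholder hostname, and the connection fails with a verification error if the certificate is not trusted.

import socket
import ssl

hostname = "example.com"  # placeholder domain
context = ssl.create_default_context()  # uses the system's trusted CA store
with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # raises ssl.SSLCertVerificationError for untrusted certificates
print("Issued to:", dict(pair[0] for pair in cert["subject"]))
print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
print("Expires:", cert["notAfter"])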
Who needs an SSL certificate?
An SSL certificate is essential for web resources that work with personal data. It is impossible to secure transactions on marketplaces and other services without an encrypted connection, and cryptography reduces the probability of data interception by intruders. In addition, there are other tangible benefits:
- Increased visitor trust. People are used to seeing the lock icon in the address bar, and all large IT projects switched to HTTPS a long time ago. An SSL certificate shows visitors that they are protected.
- Ability to connect third-party services. Modern payment and auxiliary web services (e.g., various Google tools) only work with resources that have an SSL certificate. If you need to integrate third-party elements, you can’t do without SSL.
- Improved search engine ranking. Search engines nudge website owners toward certifying their web projects. For example, Google directly states that HTTPS is a ranking factor, so if you don’t want to end up on the fourth page of search results for your topic, enable SSL.
- Compliance with personal data protection laws. For example, the European GDPR requires organizations, including websites that collect personal data, to protect it, and encryption is a standard way to do so. SSL helps meet the requirements of such laws.
Types of certificates
Certificates are classified by who issues them, how many domains they cover, and how the owner is validated.
- Self-signed. This certificate is signed directly by your own server, and anyone can generate one. However, there is not much use for it: browsers will still warn that the connection is not secure when someone opens a page served with a self-signed certificate.
- Signed by a certification authority (CA). Such certificates are signed by special organizations that vouch for the authenticity of the digital signature. There are not many certification authorities; the best known include Comodo, Cloudflare, and Let’s Encrypt.
Certificates from certification authorities are divided into groups based on what has to be verified to obtain them:
- Domain Validation (DV). The basic level. It provides encryption and an HTTPS connection but contains no proof that the company behind the site actually exists.
- Organization Validation (OV). Establishes an encrypted connection and also certifies the organization that received the certificate. Only officially registered legal entities can obtain it.
- Extended Validation (EV). The highest level of certification available to online businesses. To obtain it, you have to pass extended validation and provide the documentation proving your rights to the domain name.
Certification authorities also offer additional options:
- SAN (Subject Alternative Names). When ordering a certificate, the client specifies a list of domains that the certificate will cover.
- Wildcard. The certificate covers the domain name and all of its subdomains.
How to choose an SSL certificate?
The choice of certificate should be based on your business needs: a FinTech project and a landing page for webinar sign-ups need different levels of SSL certificate. It is preferable to install a certificate before launching a web project, which helps you take better positions in search engines right away. Switching to HTTPS itself is quick, but Google’s algorithms may take several months to notice it.
SSL certificate pricing varies from zero to several hundred dollars per year. If you are a local business owner or run a non-profit IT project, you can get a free SSL certificate. However, for a large project that processes personal data and payments, a paid certificate is recommended. Once you have decided on the number of domains to cover and the type of verification, you can proceed to selecting a certification authority that offers the required set of services.
How to get a free SSL?
Free SSL certificates mostly belong to the Domain Validation category. They suit small websites that do not ask for sensitive data such as credit card numbers. A DV certificate is suitable for:
- a personal blog;
- a company’s landing page;
- a one-page website for registering for an event.
You can get a free SSL certificate from almost any CA, although most of them issue certificates with a short validity period. For example, Let’s Encrypt and Comodo offer free 90-day DV certificates, including wildcards. In addition, many hosting providers bundle DV certificates with their hosting packages.
However, DV does not guarantee that the website belongs to a real legal entity. For payment systems, marketplaces, and other Internet projects that request banking information, it is better to have OV or EV certification.
How to install an SSL certificate on my website?
To get an SSL certificate working, you need to do the following:
- Preconfigure your server and make sure that the WHOIS data matches the information you send to the CA.
- Generate a certificate signing request (CSR) on your server; your hosting provider can help here, and a minimal sketch is shown after this list.
- Send the request to the CA for verification, along with information about the organization and the domain name.
- Install the issued SSL certificate on the site after validation.
Depending on the selected CA and verification method, issuing the certificate may take anywhere from a few minutes to several weeks (for extended validation).
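As an illustration of the CSR step, here is a minimal sketch in Python using the third-party cryptography package; the domain example.com, the organization name, and the output file names are placeholders, and in practice the key and request are usually generated with the tools your hosting panel or web server provides.

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA private key that stays on the server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build the signing request with the subject and the covered domains (SAN).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example LLC"),
        x509.NameAttribute(NameOID.COMMON_NAME, "example.com"),
    ]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com"), x509.DNSName("www.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# Save the private key and the CSR; the CSR file is what gets sent to the CA.
with open("example.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("example.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))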
Summing up
An SSL certificate provides secure data exchange between a website and a user’s computer. Thanks to cryptography, hackers cannot steal your data for their own purposes. It is not only about protection, but also about visitor trust, search engine ranking, and the ability to connect additional services.
There are many companies on the market that issue SSL certificates. Based on your needs and objectives, you can install a free SSL certificate or choose a paid one that provides better protection and a wider range of features.
We are always on duty and ready to assist!

CDN for streaming services
CDN (Content Delivery Network) is a network infrastructure that optimizes the delivery of data to end users through linked sites, or points of presence (PoPs), located in different geographic regions. This balances the load by processing requests closer to users, which ultimately means faster loading for visitors of a web resource.
How it works
With regular hosting, the visitor’s browser interacts directly with the hosting server where all the data is stored. A CDN adds intermediate links to this chain: caching points of presence scattered around the world.
This network of data centers stores cached copies, or portions of the files, for quick delivery to end users. Even if the client site is hosted in New York, a visitor from Japan can browse the website with minimal delay, because the cached copy of the page is served from the point closest to them; a toy illustration of this caching idea follows below.
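The sketch below is a deliberately simplified, hypothetical edge cache in Python: the first request for a URL goes to the origin, and repeat requests are answered from the in-memory copy held by the “PoP”. Real CDNs add expiry rules, invalidation, and distribution across many locations.

import urllib.request

# Toy edge cache: maps a URL to the bytes previously fetched from the origin.
cache: dict[str, bytes] = {}

def fetch_via_pop(url: str) -> bytes:
    if url in cache:
        return cache[url]                      # cache hit: no round trip to the origin server
    with urllib.request.urlopen(url) as resp:  # cache miss: pull the object from the origin
        body = resp.read()
    cache[url] = body                          # keep a copy at the "edge" for later visitors
    return body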
Advantages of CDN
The technology has a number of advantages:
- High loading speed. Since the distance between consumers and hosting servers is reduced to a minimum, the response time rarely exceeds a fraction of a second.
- Reducing the load on the main server. Since the traffic is evenly distributed between network elements, client infrastructure is used only for syncing and changing settings.
- Handling of “heavy” media files. Resource-intensive content (e.g., video) is fetched by end users in chunks as it plays. Without a CDN this overloads the origin’s channel, especially when users with slow connections download the content in many small pieces. With a Content Delivery Network, the fragments are stored on edge servers, which can handle tens or hundreds of thousands of requests, regardless of their size, without lag.
- Fault tolerance. The structure can consist of hundreds of points of presence distributed across countries and continents. With this configuration, the system provides significant redundancy of cached data storage.
- Scalability and reliability. A CDN is virtually unlimited in the number of requests it can serve and expands as the load increases. A peak of millions of sessions will not affect the availability of information; it is synchronized without disruptions. Even in case of an emergency, users will still have access to static content at their local point of presence.
- Better positions in search results. Slow page loading hurts user experience, which significantly reduces conversion rates and increases bounces, and that in turn lowers search engine ranking. Distributing requests between data centers improves conversion and, as a consequence, the resource’s position in search results.
When is a CDN a must-have?
Now that we know what a CDN is, let’s discuss who needs it. A CDN is an excellent choice for IT projects with a wide audience in remote corners of the world. This network technology increases reliability and download speed at any volume of incoming traffic. It is impossible to imagine a modern game portal or a popular mobile app without it.
In the last decade, streaming services, which play content directly from the Internet, have become widespread. Watching movies and sports broadcasts in high quality and listening to music online are now the norm for millions of users. Content is no longer stored on devices; it lives online, and CDNs have become a real lifesaver for such projects.
However, this approach is unnecessary for ordinary websites that do not serve a wide geography or constantly send “heavy” files. A business website for a company operating in California and hosted in Los Angeles does not need a CDN. If your website takes a long time to render in the browser, the problem is most likely in the code, not the network.
Specifics of the technology
When using a content delivery network, it helps to understand how it works under the hood. When choosing a provider, you may encounter different algorithms that determine which data center a visitor is sent to. They are based on two main web technologies: Anycast and GeoDNS.
With Anycast, many points of presence announce the same dedicated IP address, and the connection is routed to the closest one. The visitor’s ISP receives route announcements for the CDN and picks an optimal path; if the connection is lost, the client connects to the next closest PoP. The technology is based on BGP, the protocol underlying redundancy and route selection on the Internet.
With GeoDNS, the answer to a DNS query depends on where the user is: the CDN determines the requester’s location and returns the address of the closest server according to predefined rules. A quick way to observe this is sketched below.
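As a rough way to see GeoDNS in action, you can resolve the same hostname through resolvers in different networks and compare the answers. The sketch below uses the third-party dnspython package; cdn.example.com is a placeholder hostname, and the answers will only differ if the name is actually served by a location-aware DNS setup.

import dns.resolver  # third-party package: dnspython

def resolve_with(nameserver: str, hostname: str) -> list[str]:
    # Ask one specific recursive resolver for the A records of the hostname.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return [record.to_text() for record in resolver.resolve(hostname, "A")]

host = "cdn.example.com"  # placeholder CDN hostname
print("Via 8.8.8.8:", resolve_with("8.8.8.8", host))  # answer seen via Google Public DNS
print("Via 9.9.9.9:", resolve_with("9.9.9.9", host))  # answer seen via Quad9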
CDN providers. How to choose?
CDN services are provided by a number of local and global companies. To avoid making the wrong choice, you should pay attention to the following:
- Type of content. Different data requires different bandwidth: streaming and online games need a wider channel than static content, and it costs more.
- Location of the main audience. You need to determine where most of your users are coming from. And then see if there are CDN PoPs in these regions.
- Data transmission security. It is important to make sure that the CDN supports TLS (transport layer security) certificates, and allows you to use your own SSL certificate.
- Flexibility of the settings. The more options you have in your control panel, the more effectively you can optimize traffic routes and increase the speed of delivery.
- Tech support. The speed and proficiency of support determine the reliability of your service, and therefore your revenue; 24/7 support is a big plus.
- Pricing and a trial period. For many web projects, cost is a major deciding factor, so it is better to estimate the budget in advance, taking CDN costs into account. In addition, some providers offer a free trial period to get acquainted with their services.
Melbicom has its own CDN network with data centers in 36 countries across all continents. We offer caching of static content and video files, full support for the HTTP/2 protocol, and detailed statistics on all of our tariffs. Thanks to Brotli compression, customers and their visitors can significantly reduce traffic costs.
Summing up the results
What is a CDN? It is the technology that lets a subscriber in Bulgaria comfortably watch a new season of “Money Heist” on Netflix. A Content Delivery Network makes the Internet more accessible and is becoming the backbone of many modern IT solutions: marketplaces, streaming platforms, and game portals. It speeds up site loading and keeps Internet projects running continuously. There are many offers on the market, but if your project is aimed at a small, geographically concentrated audience and does not involve the constant transfer of “heavy” media files, think twice about whether a CDN is worth it. After all, everything comes at a price.
We are always on duty and ready to assist!

Data center reliability levels. Tier 1, 2, 3, 4.
Many parameters of a data center determine its reliability and the cost of its services. In the first half of the 1990s, the Uptime Institute (USA) developed a classification of data center reliability consisting of four levels (Tier 1, 2, 3, 4). Thanks to this, customers can clearly understand what to expect from a provider. It is worth noting that although only Uptime Institute Professional Services can issue the official certificate, the certification criteria are described in detail in the documentation, so some service providers claim on their own that they meet the requirements of a particular Tier.
What are the criteria for evaluating reliability?
Data center levels differ in many ways; here are the main ones:
- Equipment redundancy. The presence of redundant components in the infrastructure that keep it running if the main elements fail. Redundancy follows the N+1 scheme (one backup component is added for each type of element) or the 2(N+1) scheme (two parallel systems plus a backup component for each type).
- Distribution paths. How the engineering systems are organized: communication cables, cooling, and power supply. Redundant distribution paths speed up servicing and increase fault tolerance.
- Maintenance without shutdowns. High-level facilities have enough redundant components to repair and maintain equipment without interrupting operation.
- Total annual downtime. The number of hours per year when the servers are unavailable.
- Fault tolerance percentage. The share of the year during which the facility is up.
Based on these criteria, each data center is assigned a certain Tier. The higher it is, the better.
Tier 1 – Basic level
Tier I is the lowest level in this classification. Such a data center has a dedicated room for hardware and is equipped with cooling systems and an uninterruptible power supply. To receive the basic certificate, it must stay within a maximum downtime of 28.8 hours per year. It may lack spare power generators, humidity and air-conditioning controls, and raised floors for routing utility cables and pipes.
If maintenance is needed, the entire system has to be shut down. Today’s requirements for repair speed and availability have moved far ahead, so the technical capabilities of such facilities are severely outdated.
Tier 2 – Redundant capacity
In addition to everything in the previous level, the data center is equipped with spare uninterruptible power supplies (UPS) and cooling units, and raised floors for cables and pipes. The equipment is organized according to the N+1 scheme, which significantly reduces the probability of emergencies. However, hosting services still have to be suspended during maintenance or troubleshooting. The allowed downtime for Tier II is under 22 hours per year.
Tier II is common on the market. Such hosting is affordable and is used by small businesses, for which a short-term outage does not mean big losses.
Tier 3 – Maintenance without shutdowns
Tier 3 data centers are the most common; they are the optimal solution for IT projects that cannot tolerate long downtime and outages. Distribution paths are redundant, with only one path active at a time, and redundancy across the entire infrastructure reduces the maximum downtime to 1.6 hours per year. This makes it possible to maintain hardware without a complete shutdown.
Tier III can therefore be considered highly fault-tolerant, with a very high degree of availability. It is used by many modern online projects and by web services of government agencies.
Tier 4 – Maximum fault tolerance
Tier 4 is the most reliable class of data center. Its distinctive feature is full duplication of all infrastructure components according to the 2(N+1) scheme. In addition, the structural blocks of a Tier IV data center are separated into different rooms, which makes them fully independent and keeps the annual downtime limit at only about 26 minutes.
Such numbers are essential for IT projects and services for which a potential outage is simply not tolerated. However, building and maintaining such data centers is very expensive, which has a direct impact on the cost of hosting. Moreover, for many organizations, such characteristics are superfluous.
For convenience, all data center levels are compared in the table below:
Data center reliability level | Equipment redundancy | Distribution paths | Maintenance without shutting down the data center | Total annual downtime, hours | Facility fault tolerance, % |
Tier 1 | no | no | no | 28.8 | 99.671 |
Tier 2 | N+1 | yes | no | 22 | 99.749 |
Tier 3 | N+1 | yes | yes | 1.6 | 99.982 |
Tier 4 | 2(N+1) | yes | yes | 0.43 | 99.995 |
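As a quick cross-check, the downtime column follows directly from the fault-tolerance percentage: downtime is (100% minus availability) of the 8,760 hours in a year. A minimal sketch of the arithmetic in Python:

HOURS_PER_YEAR = 365 * 24  # 8760 hours

def annual_downtime_hours(availability_percent: float) -> float:
    # Convert a fault-tolerance percentage into allowed downtime per year.
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

for tier, availability in {"Tier 1": 99.671, "Tier 2": 99.749,
                           "Tier 3": 99.982, "Tier 4": 99.995}.items():
    print(f"{tier}: {annual_downtime_hours(availability):.2f} hours per year")
# Prints roughly 28.82, 21.99, 1.58, and 0.44 hours respectively.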
Which Tier to choose?
The choice of data center should be based on your business goals. The key task for the client is to find a balance between cost and acceptable indicators, because the higher the Tier, the higher the hosting price. For critical infrastructure projects, where even the smallest failure may lead to serious losses, it is better to choose Tier 4 data centers. If you need high-quality, virtually trouble-free hosting, pick a Tier 3 provider. Small projects can be hosted in the second tier, but keep in mind the possibility of occasional outages and back up your servers for faster recovery.
It is worth mentioning that reliability is not the only criterion when choosing a provider: the provider’s expertise, the speed of technical support, and geographic location also matter. The Melbicom network spans 14 Tier III and Tier IV data centers in Europe, Africa, MENA, Asia, and North America.
We are always on duty and ready to assist!

What you should know about IPv4 and IPv6
IT technology has become a vital part of people’s lives. According to Cisco analysts, the number of Internet users was expected to reach 5.3 billion by 2023, while the number of computers and other devices connected to the World Wide Web was projected to reach 29.3 billion.
Smartphones, laptops, smart cookers, and TV set-top boxes: the average Internet user has more than three devices connected to the network. They interact with each other via the Internet Protocol (IP), which addresses and routes data over an Internet connection. Currently, two versions of the protocol are in use: IPv4 and IPv6. What are their features? How do IPv4 and IPv6 differ? And why hasn’t the Internet settled on one of them yet? We will try to answer these questions in this article.
Features of IPv4
If you need to send someone a letter, you have to know their exact address: it is the only way the other person can receive the letter and send a reply. A similar principle is used for data exchange on the Internet. The Internet Protocol gives every device an address, a unique combination of numbers by which the device can be definitively identified. The fourth version of the protocol, IPv4, created back in the early 1980s, defines the rules of exchange and gives everyone a 32-bit address. It is written as four sections separated by dots, for example 192.168.1.103, with each section ranging from 0 to 255. To separate local and external addresses, the protocol uses a subnet mask that is applied to the IP address and determines which network it belongs to. 32 bits of information can encode about 4.29 billion different combinations.
As the figures in the introduction show, the number of such combinations in IPv4 is significantly smaller than the number of network connections in the world. The problem became obvious back in the mid-90s, when Internet technology began to develop rapidly. Initially, to compensate for the deficit, the available IPv4 space was stretched with CIDR (Classless Inter-Domain Routing) and later with NAT (Network Address Translation), which replaces internal network addresses and ports with a shared public address. However, despite these measures, the demand for unique IPs kept outpacing what the fourth version of the architecture could offer. The sketch below shows these IPv4 concepts in code.
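Python’s standard ipaddress module is a convenient way to poke at the concepts above; the addresses used here are the examples from the text.

import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")  # CIDR notation: a 24-bit prefix, i.e. mask 255.255.255.0
address = ipaddress.ip_address("192.168.1.103")   # the example address from the text

print(address in network)     # True: applying the mask places it inside this local network
print(network.netmask)        # 255.255.255.0
print(network.num_addresses)  # 256 addresses in the /24 block
print(address.is_private)     # True: an RFC 1918 range, typically hidden behind NAT
print(2 ** 32)                # 4,294,967,296 possible IPv4 addresses in total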
Advantages of IPv6
In response to the shortage of free IPv4 addresses, an improved version of the protocol, IPv6, was introduced in 1995. Its main difference from IPv4 is that the entire address is 128 bits long. That gives 2^128 possible combinations, roughly 3.4 × 10^38 addresses, which should be enough for a very long time.
Every IPv6 address consists of eight groups of four hexadecimal digits separated by colons, for example ef05:1db1:12a5:0200:0230:a71e:1363:53e1. At the same time, thanks to the larger size, many more auxiliary parameters can be encoded, which increases transmission efficiency. This format offers a number of significant advantages (an address-handling sketch follows the list below):
- A larger number of addresses, which is enough to assign every device a unique web address.
- There is no need for NAT and CIDR, which has a positive impact on the efficiency of VPN and p2p connections.
- Better Internet routing due to the ability to build a hierarchy within the available amount of address information. This reduces the cost of traffic distribution according to the routing table.
- Simplified Quality of Service (QoS) system, which identifies delay-sensitive packets.
- Header optimization helps you handle incoming and outgoing traffic more efficiently.
- High connection security due to the built-in IPSec network-level cryptographic security protocol.
- Better compatibility with mobile data transfer.
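The same standard ipaddress module handles IPv6; the address below is the example from the text.

import ipaddress

addr = ipaddress.IPv6Address("ef05:1db1:12a5:0200:0230:a71e:1363:53e1")

print(addr.compressed)  # short form with leading zeros dropped: ef05:1db1:12a5:200:230:a71e:1363:53e1
print(addr.exploded)    # full form: eight groups of four hexadecimal digits
print(2 ** 128)         # size of the IPv6 address space, about 3.4e38
print(ipaddress.ip_network("2001:db8::/32").num_addresses)  # even one /32 prefix holds 2**96 addresses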
IPv6 vs IPv4 comparison table
To make the comparison of IPv6 and IPv4 easier to understand, all their main characteristics can be summarized in a table:
Description | IPv4 | IPv6 |
Address | 32-bit numeric sequence, written as four decimal parts separated by dots | 128-bit sequence, written as eight hexadecimal groups separated by colons |
Subnet mask | Used to determine which network an address belongs to | Not used; prefix lengths (CIDR notation) instead |
Header format | 12 fields | 8 fields |
Type of addressing | Unicast, broadcast, and multicast | Unicast, multicast, and anycast |
Configuration | IP addresses and routes must be assigned | Not required; stateless autoconfiguration is available |
Security | Not built into the protocol | Built-in support for IPsec |
Availability | All providers | Limited list of providers |
Packet size | 576 bytes minimum; fragmentation is optional | At least 1280 bytes; routers do not fragment packets |
As the comparison table shows, version 6 has significant advantages. IT giants like Amazon, Google, and Facebook have supported IPv6 for a long time, and many hosting providers also offer IP addresses in the new format. However, as of 2020, IPv6 accounted for only about 30% of total traffic compared to IPv4, and there are reasons for that.
Difficulties of moving to IPv6
Not everyone is rushing to switch to IPv6. The infrastructure that emerged during the era of rapid web development was shaped around IPv4, and the transition to the new Internet Protocol requires technical re-equipment and significant financial and time resources. You have to buy and configure expensive hardware, update DNS, and do a lot of other integration work, and unfortunately not all administrators fully understand how to operate IPv6 properly. Because the sixth version of the protocol is not backward compatible with the fourth, dual-stack technology is needed to run both protocols simultaneously, as the sketch below illustrates.
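A simple way to see dual-stack name resolution from Python: getaddrinfo returns both IPv6 (AAAA) and IPv4 (A) results when the host publishes both, so a client can try IPv6 first and fall back to IPv4. example.com is a placeholder hostname.

import socket

# Ask the resolver for every address family the host advertises.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])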
However, despite all the difficulties, the transition to IPv6 is just a matter of time. As computers, cell phones, and other gadgets spread, the requirements for their connectivity keep growing. IPv6 not only extends the address space but also simplifies Internet routing and improves security.
We are always on duty and ready to assist!

DDoS attack protection methods
Even people far removed from the technical side of the Internet are familiar with the concept of a DDoS attack. We see its results regularly: websites hang, applications glitch and fail to load pages, online games show connection error messages. How do these attacks happen? How much does a business lose because of them? Is it possible to protect your project from bot attacks? How do you protect a server from DDoS? We will try to answer these questions in this article.
What is a DDoS?
A Distributed Denial of Service (DDoS) attack is an attack on Internet resources intended to disable them and make them unavailable to visitors. In a nutshell, it means sending a huge number of requests to a particular web resource over the Internet. As a result, the server cannot handle the volume of incoming data and hangs. Another possible outcome is that the server’s communication channel becomes so clogged that requests from real users no longer get through, and the cost of the traffic exceeds reasonable limits. In either case, the resource becomes unavailable.
A significant fraction of attacks occurs according to the following algorithm:
- Collecting analytics about the target to identify possible vulnerabilities and select the best scenario of the attack.
- Creation of a special network of bots (botnet) from infected or vulnerable devices, from which the attack will be performed.
- Simultaneous requests from multiple bots.
- Analysis of the results: whether the targets have been reached, whether additional data collection about the architecture and vulnerabilities is needed.
Sometimes an attack is organized without deploying a botnet. The attacking computer sends requests to third-party services while spoofing its source IP address with the victim’s IP, so the much larger responses are delivered to the victim’s server. DDoS attacks can also target a specific segment of a web service: “smart” attacks pick individual resource-intensive endpoints and load them directly, bringing down the entire web application.
There are many classifications of DDoS. The most common classification is based on the seven-layer Open Systems Interconnection (OSI) model of data network architecture:
- Network layer (L3): These attacks target network devices directly, such as routers and switches. They abuse protocols such as IP, OSPF, ICMP, IGMP, RIP, DVMRP, PIM-SM, IPsec, IPX, and DDP.
- Transport layer (L4): Here hackers attack the servers themselves and some Internet services, such as gaming portals. The attacks target the TCP and UDP protocols and the UDP-Lite, DCCP, SCTP, and RUDP subprotocols.
- Application layer (L7): Here attackers exploit flaws in the code of web applications. The attack is performed via DNS, HTTP, HTTPS, and so on.
How dangerous are DDoS attacks?
Modern companies are very dependent on information and communication technology. Almost every business can be called an IT project in a way. Digitalization has turned IT channels into critical points for data exchange with clients and partners, for internal communication and analytics. Failures of digital infrastructure, downtime of corporate portals lead to significant losses and damage to the reputation of entire corporations.
With the massive move online after the pandemic, it is not only banking, government, gaming, and e-commerce services that are targets for DDoS attacks; small business websites, food delivery services, and healthcare portals are also at risk.
DDoS attacks have become a popular tool of hackers and malefactors: they are relatively easy to launch, inexpensive, and effective. The cost of organizing a single botnet attack starts from $50 per day. You can even find dedicated services on the darknet that provide DDoS services with ratings and cashback.
A business owner’s potential losses are disproportionately higher than the cost of the DDoS attack itself. Even a small sushi delivery shop with an average check of $50 and 60 orders per day stands to lose $90,000 in revenue per month, plus significant damage to its reputation, while the attack costs the hackers only about $1,500.
Obviously, for large corporations, the potential damage and risks are much higher. For example, the October 2021 outage of Facebook, Instagram, and WhatsApp, when users reported problems with logging in, loading content, and accessing the platforms, reportedly cost Mark Zuckerberg $6.6 billion.
Protecting your website from DDoS: what can you do yourself?
It is worth saying up front that it is barely possible to build full-fledged protection on your own. Fighting off network raids yourself is complicated by the fact that a barrage of bot traffic can saturate your communication channels before it ever reaches your filtering system, and there is no single guaranteed free method of protecting a website from DDoS attacks. Nevertheless, there are things you can do:
- Capacity expansion. This is an expensive option, commonly used by large web resources as a safety margin. Investing in bandwidth and computing resources allows more traffic to be processed at the same time.
- Router configuration. The router can be configured to filter out “garbage” traffic. The main disadvantage of this method is that it can be difficult to determine which traffic should be blocked and which should be let through: requests from a botnet can come from different regions, carry different content, and even be disguised as real users.
- Fine-tuning and regular software updates. Some DDoS attacks target specific vulnerabilities in the software or the operating system of the main server. Developers of good software keep an eye on new attack techniques and fix problems in their updates. The default OS and web server settings of Apache or Nginx often have significant limitations; this is especially true for the performance of servers running Nginx and for the Linux network stack. Also, if a third-party CMS (such as WordPress, Joomla, or OpenCart) is used on the website, read its optimization manual thoroughly.
- Server status monitoring. It is strongly recommended to set up monitoring of key indicators, such as CPU load, RAM usage, and number of visitors, so that you can detect unusual activity in time and minimize losses; a minimal monitoring sketch follows this list.
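As an illustration of the monitoring point, here is a minimal sketch in Python using the third-party psutil package; the thresholds and check interval are hypothetical placeholders to be tuned to your own baseline, and a real setup would alert through a monitoring system rather than print to the console.

import time
import psutil  # third-party package for reading system metrics

CPU_LIMIT = 85.0  # percent; placeholder threshold
RAM_LIMIT = 90.0  # percent; placeholder threshold

while True:
    cpu = psutil.cpu_percent(interval=5)                    # average CPU load over 5 seconds
    ram = psutil.virtual_memory().percent                   # RAM usage in percent
    connections = len(psutil.net_connections(kind="tcp"))   # currently open TCP connections
    if cpu > CPU_LIMIT or ram > RAM_LIMIT:
        print(f"ALERT: cpu={cpu:.0f}%, ram={ram:.0f}%, tcp_connections={connections}")
    time.sleep(55)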
DDoS protection services
If your bandwidth and computing resources are insufficient to repel attacks on your own, you can consider using DDoS protection solutions offered by hosting providers. What are the advantages of such IT solutions?
- Protection at all OSI layers. Self-protection methods are applicable if a cyberattack occurs at one particular layer. Specialized technologies comprehensively filter botnet traffic using several tools at once, which helps secure all layers L3-L7.
- You get only “clean” traffic. The hosting provider’s protection system processes incoming traffic before it reaches your server, and the traffic scrubbing does not reduce your performance.
- Scalability. Since you don’t need to integrate protection into your own IT infrastructure, you don’t have to rebuild the architecture and buy more equipment when expanding. You just need to choose a more suitable tariff.
- Technical support during accidents. Attackers constantly invent sophisticated ways and approaches, and if your server becomes a target for a bot invasion, you won’t be left alone with the problem and you can count on professional help.
- Big capacity. Hosting companies that provide DDoS protection have powerful, dedicated scrubbing infrastructure built specifically to filter requests, on a scale no single project can match.
So what to do?
The impact of DDoS attacks on businesses grows in step with the digitization of the economy and companies’ increasing dependence on the Internet. For many projects, even one day of website downtime means huge losses. Attackers use the newest technologies and keep discovering new vulnerabilities; it is a real arms race between hackers and engineers over who will outsmart whom. To protect yourself in this war, it is better to turn to a professional IT service.
Melbicom provides professional DDoS protection services. Its distinctive features are pricing that does not depend on attack volume, 24-hour technical support, and combined hardware and software traffic filtering.