Absolute roughness of pipes (Table)
Surface roughness is the set of surface irregularities with relatively small spacing over the sampling (base) length.
The head loss along the length of a pipe under turbulent conditions can be influenced by the roughness of the walls. By roughness we mean the presence of irregularities (protrusions and depressions) on a surface. In factory production of pipes, the roughness of their internal walls is irregular in both height and location, and therefore cannot be characterized by a single parameter. Nevertheless, in technical calculations a single parameter is chosen, namely the average height of the roughness protrusions; it is denoted by k (or Δ).
The absolute roughness Δ is the average height of the roughness protrusions.
Experiments have shown that with the same value of absolute roughness, its influence on the value of hydraulic resistance is different depending on the diameter of the pipe. Therefore, the relative roughness value Δ/d is introduced.
Relative roughness is the ratio of absolute roughness to the pipe diameter, i.e. Δ/d.
Table of absolute pipe surface roughness
| Types of pipes and materials | Pipe surface condition, pumped medium and operating conditions | Roughness Δ, mm |
|---|---|---|
| Solid drawn brass, copper, lead | Technically smooth | 0.0015–0.0100 |
| Solid drawn aluminum | Technically smooth | 0.015–0.06 |
| Solid drawn steel | New | 0.02–0.10 |
| | Cleaned after years of use | up to 0.04 |
| | Bitumenized | up to 0.04 |
| | Heat-and-power lines for steam and water, with deaeration and chemical treatment of the make-up water | 0.10 |
| | After one year of operation on a gas pipeline | 0.12 |
| | After several years of operation in gas-well pumping systems under various conditions | 0.04–0.20 |
| | Steam lines for saturated steam and water heat lines with minor water leaks (up to 0.5%) and deaeration of the make-up water | 0.20 |
| | Pipelines of water heating systems, regardless of the heat source | 0.20 |
| | Oil pipelines under average operating conditions | 0.20 |
| | Moderately corroded | 0.4 |
| | With small scale deposits | 0.4 |
| | Steam lines operating intermittently (with downtime) and condensate lines with an open condensate system | 0.5 |
| | Air lines from piston and turbo compressors | 0.8 |
| | The same, after several years of operation under other conditions (corroded or with slight deposits) | 0.15–1.0 |
| | Used water pipes | 1.2–1.5 |
| | With large scale deposits | 3.0 |
| | The same, with the pipe surface in poor condition | ≥5.0 |
| Welded steel | New, or old in good condition, with welded or riveted joints | 0.04–0.10 |
| | New, bitumenized | 0.05 |
| | Used, bitumen partially dissolved, corroded | 0.10 |
| | Used, uniformly corroded | 0.15 |
| All-welded steel | Main gas pipelines after many years of operation | 0.5 |
| | The same, with layered deposits | 1.1 |
| | With significant deposits | 2.0–4.0 |
| | After 25 years of operation on a city gas pipeline; uneven deposits of tar and naphthalene | 2.4 |
| | Pipe surface in poor condition; uneven overlap of joints | 5.0 |
10.2. Patterns of changes in the coefficient of hydraulic friction
The pressure loss along the length of the pipeline is usually found using formula (9.14). The main task is then to determine the coefficient of hydraulic friction λ. In general, the coefficient of hydraulic friction can depend on two dimensionless parameters: the Reynolds number Re = vd/ν and the relative roughness k/d, i.e. λ = f(Re, k/d).
Fig. 10.1 shows an experimental graph of the dependence of the coefficient λ on the Reynolds number. On it the change in λ is represented by a series of curves, each of which corresponds to a certain relative roughness, i.e. ratio k/d.

Three regions can be distinguished in the graph: I - the region of hydraulically smooth pipes, corresponding to relatively low Reynolds numbers; II - the region of subquadratic resistance; III - the region of quadratic resistance. In the region of hydraulically smooth pipes the coefficient λ depends only on the Reynolds number; in the subquadratic region it depends on both Re and the relative roughness; and in the region of quadratic resistance, only on the relative roughness.
Fig. 10.1. Murin–Shevelev graph
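The three regions can be illustrated numerically. The sketch below uses the Altshul correlation λ = 0.11(k/d + 68/Re)^0.25, a common engineering approximation for this graph (an assumption here, not a formula from the text): it tends toward smooth-pipe behaviour at low Re and toward a roughness-only value at high Re.

```python
# Hydraulic friction coefficient via the Altshul correlation.
def friction_factor(re: float, k_over_d: float) -> float:
    return 0.11 * (k_over_d + 68.0 / re) ** 0.25

# Region I (hydraulically smooth pipes): lambda falls as Re grows.
smooth_lo = friction_factor(1e4, 1e-5)
smooth_hi = friction_factor(1e5, 1e-5)

# Region III (quadratic resistance): lambda barely changes with Re
# and is set almost entirely by the relative roughness k/d.
rough_lo = friction_factor(1e6, 0.01)
rough_hi = friction_factor(1e7, 0.01)
print(smooth_lo, smooth_hi, rough_lo, rough_hi)
```

Running this shows λ dropping noticeably between Re = 10⁴ and 10⁵ for a smooth pipe, while for k/d = 0.01 it is nearly constant between Re = 10⁶ and 10⁷.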
Retransmission costs
The Internet is a best-effort network: packets are delivered if possible, but may also be dropped. In the case of TCP, packet drops are handled by the transport layer; there is no such mechanism for UDP, which means that either the application does not care if some parts of the data are not delivered, or the application itself implements retransmission on top of UDP.
Retransmission reduces useful throughput for two reasons:
a. Some data needs to be sent again, which takes time. This introduces a delay that is inversely proportional to the speed of the slowest link in the network between the sender and the receiver (aka the bottleneck).

b. Detecting that some data was not delivered requires feedback from the receiver to the sender. Because of propagation delay (sometimes called latency, caused by the finite speed of light in the cable), this feedback reaches the sender only after some delay, further slowing the transmission. In most practical cases, this is the most significant contribution to the additional delay caused by retransmission.
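Points (a) and (b) can be put into a rough back-of-the-envelope model. The numbers and the model itself are illustrative assumptions (one extra round trip per lost packet, no congestion effects), not measurements:

```python
# Estimate total transfer time with retransmission: each lost packet
# costs one extra serialization at the bottleneck (point a) plus one
# round trip for the loss to be detected (point b).
def transfer_time(n_packets: int, pkt_bits: int,
                  bottleneck_bps: float, rtt_s: float,
                  loss_rate: float) -> float:
    serialize = pkt_bits / bottleneck_bps   # time per packet on the slowest link
    base = n_packets * serialize
    retransmitted = n_packets * loss_rate
    return base + retransmitted * (serialize + rtt_s)

# 1000 packets of 12 kbit over a 10 Mbit/s bottleneck, 100 ms RTT.
lossless = transfer_time(1000, 12_000, 10e6, 0.100, 0.0)
lossy    = transfer_time(1000, 12_000, 10e6, 0.100, 0.01)
print(lossless, lossy)
```

Even at 1% loss, the RTT term dominates the added delay, which matches the claim in (b).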
Clearly, if you use UDP instead of TCP and don't care about packet loss, you will get better raw performance. But for many applications data loss is unacceptable, so such a measurement is not meaningful.
There are some applications that use UDP to transfer data. One is BitTorrent, which can use either TCP or a protocol they developed called uTP, which emulates TCP over UDP but aims to be more efficient when using many parallel connections. Another transport protocol implemented over UDP is QUIC, which also emulates TCP and offers multiplexing of multiple parallel transmissions over a single connection and forward error correction to reduce retransmissions.
I'll discuss forward error correction a bit, as it relates to your question about throughput. The naive way to implement it is to send each packet twice; if one copy is lost, the other still has a chance to arrive. This sharply reduces the number of retransmissions (both copies must be lost before one is needed), but it also halves your useful throughput, since you are sending redundant data (note that the network or link layer bandwidth stays the same!). In some cases this is acceptable, especially when the delay is very high, for example on intercontinental or satellite links.
Moreover, there are coding schemes where you don't need to send a complete copy of the data: for example, for every n packets you send, you add one redundant packet that is the XOR (or some other arithmetic combination) of them. If the redundant packet is lost, it doesn't matter; if one of the n data packets is lost, you can recover it from the redundant packet and the other n−1. This way you can tune the overhead introduced by forward error correction to whatever amount of bandwidth you can spare.
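The XOR scheme just described fits in a few lines. This is a toy sketch with made-up packet contents, not any real protocol's framing:

```python
from functools import reduce

def xor_packets(packets):
    """XOR equal-length packets together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"AAAA", b"BBBB", b"CCCC"]   # n = 3 data packets
parity = xor_packets(group)           # the one redundant packet

# Pretend the second packet was lost in transit; rebuild it by
# XOR-ing the surviving data packets with the parity packet.
received = [group[0], group[2], parity]
recovered = xor_packets(received)
print(recovered)  # b'BBBB'
```

One parity packet per group recovers any single loss in that group; overhead is 1/n, so larger groups trade bandwidth for weaker protection.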
What is this key feature in TCP that makes it much superior to UDP?
The premise is false, although it is a common misconception.
In addition to retransmitting data when necessary, TCP also adjusts its sending rate so that it does not cause packet drops by congesting the network. The tuning algorithm has been refined over decades and usually converges quickly to the maximum rate the network (in effect, the bottleneck) supports. For this reason, it is usually difficult to beat TCP in throughput.
With UDP there is no rate limiting at the sender; UDP lets an application send as much as it wants. But if you try to send more than the network can handle, some data will be dropped, which reduces your throughput and also makes the network administrator very angry at you. This means that sending UDP traffic at high rates is impractical (unless the goal is a DoS attack).
Some media applications use UDP, but the sender rate-limits its transmission to a very low speed. This is typical of VoIP or Internet radio applications, which need very little bandwidth but low latency. I believe this is one of the sources of the misconception that UDP is slower than TCP; it is not true, UDP can be as fast as the network allows.
As I said before, there are protocols such as uTP or QUIC implemented on top of UDP that provide similar performance to TCP.
What is bit rate and how is it measured?
Bit rate is a measure of connection speed, calculated in bits (the smallest unit of information) per second. It was the natural unit for communication channels in the early days of the Internet, when mostly text files were transmitted over the web.

Today the basic unit of measurement is the byte, which equals 8 bits. Beginners very often make the mistake of confusing kilobits and kilobytes. This is where the confusion arises when a channel with a bandwidth of 512 kbit/s does not live up to expectations and delivers only 64 KB/s. To avoid confusion, remember that when bits are used to indicate speed, the notation is written out: bit/s, kbit/s or kbps.
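The kilobit/kilobyte mix-up from the paragraph above, in code:

```python
# A link speed quoted in kilobits per second delivers one eighth of
# that number in kilobytes per second (8 bits per byte).
def kbits_to_kbytes_per_s(kbit_s: float) -> float:
    return kbit_s / 8

print(kbits_to_kbytes_per_s(512))  # 64.0
```

So the 512 kbit/s channel really is working as advertised; it is the units that mislead.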