Summary of network programming and concurrent programming

Software development architectures:

C/S architecture:

Client: the client software

Server: the server side

Advantages: occupies less network bandwidth, and the software runs more stably.

Disadvantages: whenever the server is updated, the client must be updated too; accessing multiple servers requires downloading the corresponding software for each one, which takes up a lot of the client machine's hardware resources.

B/S architecture:

Browser: the browser

Server: the server side

Server: provides uninterrupted 24-hour service.

Client: to access different servers you only need to enter different URLs in the browser, so it occupies few of the client's hardware resources; however, it occupies more network bandwidth and becomes unstable when the network is slow.

1. Network programming:

1. The OSI seven-layer model of Internet protocols

  • Application layer
  • Presentation layer
  • Session layer
  • Transport layer
  • Network layer
  • Data link layer
  • Physical layer

Memory aid: follow the path data takes as it is transmitted onto the network.

-Physical layer

Sends binary data in the form of electrical signals.

-Data link layer

1) Specifies how the electrical signals are grouped

2) Requires a network card:

-mac address:

A unique 12-digit hexadecimal string: the first six digits are the manufacturer number, the last six digits are the serial number.

-Ethernet protocol:

Communication within the same local area network:

Unicast: a one-to-one shout

Broadcast: a one-to-many shout (can cause a broadcast storm)

Cannot communicate across LANs.

-Network layer

ip: locates which local area network a machine is on

port: uniquely identifies an application on a computer

arp protocol: resolves an ip address into the corresponding mac address

-Transport layer: TCP. The TCP protocol is called a stream protocol; to communicate, a connection must first be established.

#### 1.1 Three-way handshake of TCP protocol:

The client sends a connection request to the server; the server acknowledges that request and sends back its own request to establish the server-to-client direction; the client acknowledges that request, and the two-way channel is established.

#### 1.2 Four waves of the TCP protocol:

One side sends a disconnect request to the other; the other side acknowledges it, then checks whether it still has data to send. If not, it sends its own disconnect request back; the first side acknowledges the disconnection, and the client and server are disconnected.

Feedback mechanism of the two-way channel: when one side sends data, the receiver replies with a message confirming receipt. If no confirmation comes back, the sender resends the request at intervals; if after too long a time no reply has been received, it stops sending.

1.3 UDP protocol


  • Data delivery is not guaranteed
  • No two-way channel needs to be established
  • The client sends data to the server without waiting for a confirmation message from the server
  • Fast transmission speed
  • No sticky-packet problem

The difference between TCP and UDP:

TCP: like making a phone call

UDP: like sending a text message

Application layer

The socket module is used to write socket clients and servers; internally it encapsulates for us everything that has to be done with the seven-layer protocol stack.

3. Hand-written socket template

3.1 Server

import socket

server = socket.socket()

server.bind(('', 6666))  # the ip and port are given as a tuple

server.listen(6)  # size of the semi-connection pool

conn, addr = server.accept()

data = conn.recv(1024)  # maximum number of bytes to receive

conn.send('Message sent'.encode('utf-8'))  # data must be sent as bytes, so encode it

3.2 Client

import socket

client = socket.socket()

client.connect(('127.0.0.1', 6666))  # connect to the server's (ip, port) tuple

client.send('hello'.encode('utf-8'))

data = client.recv(1024)
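The two templates can be exercised end to end in a single script. A minimal loopback sketch (the upper-casing echo, the OS-assigned port, and the threading are illustrative choices, not part of the notes):

```python
import socket
import threading

server = socket.socket()
server.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
server.listen(5)
port = server.getsockname()[1]

def serve():
    conn, addr = server.accept()
    data = conn.recv(1024)      # receive up to 1024 bytes
    conn.send(data.upper())     # echo back, upper-cased
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket()
client.connect(('127.0.0.1', port))
client.send('hello'.encode('utf-8'))
reply = client.recv(1024).decode('utf-8')
client.close()
t.join()
server.close()
print(reply)  # HELLO
```

Running the server in a thread is only to keep both sides in one file; normally the server and client are separate scripts.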




4. subprocess (understand)

Function: create a pipe to cmd through code, send it commands, and receive the results cmd returns.

import subprocess

obj = subprocess.Popen('cmd command', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

success = obj.stdout.read()

error = obj.stderr.read()

msg = success + error
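A runnable sketch with a concrete command ('echo hello' stands in for any shell command of your choice):

```python
import subprocess

# run a shell command and capture both output streams
obj = subprocess.Popen('echo hello', shell=True,
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
success = obj.stdout.read()   # bytes written to standard output
error = obj.stderr.read()     # bytes written to standard error
obj.wait()                    # reap the child process
msg = success + error         # combine both streams
print(msg.decode().strip())
```

Note the streams return bytes; decode them before displaying.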

5. Sticky-packet problem

1) The receiver cannot determine the size of the data sent by the other side.

2) When several small pieces of data are sent within a short interval, TCP packs them together by default, so data sent in multiple calls is delivered all at once.

6. struct solves the sticky-packet problem

Primary version:

Pack the length of the data into a fixed-length header: struct.pack('i', len(data))

The other side uses the header when it receives the data:

data_len = struct.unpack('i', headers)[0]

Note: the header must be unpacked in exactly the same format in which it was packed.
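The primary version in miniature (the payload text is arbitrary):

```python
import struct

data = 'some payload'.encode('utf-8')

# pack the payload length into a fixed-size header; 'i' always packs to 4 bytes
header = struct.pack('i', len(data))
assert len(header) == 4

# the receiver reads exactly 4 bytes first, then knows how much data follows
data_len = struct.unpack('i', header)[0]
print(data_len)  # 12
```

Because the header is always 4 bytes, the receiver can separate one message from the next, which is what defeats the sticky-packet problem.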

Upgraded version:

First store the real data length and the file's descriptive information in a dictionary, then pack the serialized dictionary's length into the header and send the dictionary first. Benefits: the header can carry the real data length and the file description, while the data sent for the header itself stays small.

dic = {
    the real data length,
    the descriptive information of the file,
}
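A runnable sketch of this upgraded dictionary header, assuming json for serialization and a 4-byte length prefix (the key names total_size and desc are my own):

```python
import json
import struct

real_data = b'x' * 3000  # stands in for the actual payload

# 1) describe the payload in a dictionary
dic = {'total_size': len(real_data), 'desc': 'file description info'}

# 2) serialize the dictionary and prefix it with its own 4-byte length
dic_bytes = json.dumps(dic).encode('utf-8')
header = struct.pack('i', len(dic_bytes))

# a sender would do: conn.send(header); conn.send(dic_bytes); conn.send(real_data)
stream = header + dic_bytes + real_data

# receiver side: peel off the fixed header, then the dict, then the payload
dic_len = struct.unpack('i', stream[:4])[0]
received_dic = json.loads(stream[4:4 + dic_len].decode('utf-8'))
payload = stream[4 + dic_len:4 + dic_len + received_dic['total_size']]
print(received_dic['total_size'], payload == real_data)
```

The `stream` variable simulates what would travel over the socket, so the whole round trip can be checked in one process.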


7. Upload large file data

Sending side:

dic = {
    the file size,
    the file name,
}

# after sending the packed header and the dictionary, stream the file itself:

with open(file_name, 'rb') as f:
    for line in f:
        conn.send(line)

Receiving side:

dic = {
    the file size,
    the file name,
}

init_recv = 0

with open(file_name, 'wb') as f:
    while init_recv < file_size:
        data = conn.recv(1024)
        f.write(data)
        init_recv += len(data)
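The receiving loop can be tried without a network by treating a temporary file as the data source; here reader.read(1024) stands in for conn.recv(1024), and the file size of 5000 bytes is arbitrary:

```python
import os
import tempfile

# create a 'large' source file of random bytes
src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(5000))
src.close()

total_size = os.path.getsize(src.name)  # this is what the header dict would carry

# receive-style loop: stop once the expected number of bytes has arrived
dst = tempfile.NamedTemporaryFile(delete=False)
dst.close()
init_recv = 0
with open(src.name, 'rb') as reader, open(dst.name, 'wb') as writer:
    while init_recv < total_size:
        data = reader.read(1024)   # stands in for conn.recv(1024)
        writer.write(data)
        init_recv += len(data)

copied = os.path.getsize(dst.name)
print(init_recv, copied)  # both equal the source size
os.unlink(src.name)
os.unlink(dst.name)
```

Counting received bytes against the size from the header is what tells the receiver when the file is complete.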

8. socketserver (at this stage, just understand)

Can support concurrency.

import socketserver

Define a class; for TCP it must inherit from the BaseRequestHandler class:

class MyTcpServer(socketserver.BaseRequestHandler):

    # The parent class's handle must be overridden; it is called whenever a client connects.
    def handle(self):
        while True:
            data = self.request.recv(1024)  # self.request is the connection object
            self.request.send(data)

Internally socketserver has already implemented server = socket.socket() and the accept loop for us:

while True:
    conn, addr = server.accept()

server = socketserver.ThreadingTCPServer(('127.0.0.1', 8080), MyTcpServer)  # ip/port of your choice
server.serve_forever()

8.1 UDP socket template:

Server:

import socket

server = socket.socket(type=socket.SOCK_DGRAM)

server.bind(ip_port)  # a (ip, port) tuple

data, addr = server.recvfrom(1024)

Client:

import socket

client = socket.socket(type=socket.SOCK_DGRAM)

ip_port = (ip, port)

client.sendto(b'hello', ip_port)

data, _ = client.recvfrom(1024)
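A runnable UDP loopback sketch combining both templates (the OS-assigned port and the upper-casing reply are illustrative):

```python
import socket

server = socket.socket(type=socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 0))          # port 0: let the OS pick a free port
ip_port = server.getsockname()

client = socket.socket(type=socket.SOCK_DGRAM)
client.sendto(b'hello udp', ip_port)   # no connection needs to be established

data, addr = server.recvfrom(1024)     # returns (payload, sender address)
server.sendto(data.upper(), addr)      # reply to wherever the datagram came from

reply, _ = client.recvfrom(1024)
client.close()
server.close()
print(reply.decode())  # HELLO UDP
```

Because UDP is connectionless, the same script can hold both sockets without any accept step.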


2. Concurrent programming

Multiprogramming technology

Multiprogramming: switching + saving state

-Multiplexing in space: memory supports multiple programs being loaded at once

-Multiplexing in time: a program is switched out when it hits an IO operation, and also when it occupies the CPU for too long

1. Concurrency and parallelism:

Concurrency: looks like running at the same time; achieved with multiprogramming.

Parallelism: running at the same time in the true sense; processes on multiple cores.

A process is a resource unit: each process created gets its own namespace and occupies memory resources. A program is a pile of code; a process is the running of that pile of code.

2. Process scheduling:

Time-slice round robin: with 10 processes, divide the fixed time into 10 equal parts and allocate one to each process.

Multilevel feedback queue: tasks that run for a short time get the highest execution priority and are placed in the first-level queue; tasks that run longer are placed in the lower queues (level 1, level 2, level 3, ...).

3. Three states of a process:

Ready state: processes are created and queue up to run. Running state: the process is running; from here it either ends or blocks. Blocked state: the process has entered an IO operation; when the IO finishes it returns to the ready state. A process that occupies the CPU for too long is also switched back to the ready state.

4. Synchronous and asynchronous:

Synchronous and asynchronous describe the way tasks are submitted. Synchronous means serial submission: one task can only be submitted for execution after the previous task has finished. Asynchronous means tasks are submitted without waiting for each other, so multiple tasks can run concurrently.

5. Blocking and non-blocking

Blocking: the blocked state. Non-blocking: the ready and running states.

Synchronous/asynchronous and blocking/non-blocking are different concepts and must not be confused. Waiting is not necessarily blocking: a task may simply be taking too much time on the CPU, so the two are not the same concept.

Maximize CPU usage: reduce unnecessary IO operations as much as possible

### 6. Two ways to create a process

1. Instantiate Process directly:

p = Process(target=task)
p.daemon = True  # must be set before start(), otherwise an error is raised
p.start()  # submits the task of creating a process to the operating system
p.join()

2. Subclass Process and override run():

class MyProcess(Process):
    def run(self):
        # the code of the task

p = MyProcess()
p.daemon = True  # must be set before start(), otherwise an error is raised
p.start()
p.join()

7. Two conditions for reaping a process:

1. Call join, which waits for the child process to end and then reaps it. 2. The main process ends normally.

### 8. Zombie process and orphan process

Zombie process: a child process that has ended but whose pid still exists because its parent process has not yet reaped it.

Orphan process: the main process has ended while the child process is still running.

Daemon process: as soon as the main process ends, all child processes marked as daemons must end too.

9. Mutex lock:

Ensures data safety.

from multiprocessing import Lock
mutex = Lock()
mutex.acquire()
mutex.release()
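A sketch of the same idea using threading.Lock, which keeps the demonstration compact and checkable in one process (multiprocessing.Lock has the same acquire/release interface; the counter and iteration counts are arbitrary):

```python
from threading import Thread, Lock

mutex = Lock()
counter = {'n': 0}

def add():
    for _ in range(100000):
        mutex.acquire()        # only one thread may enter the critical section
        counter['n'] += 1
        mutex.release()

threads = [Thread(target=add) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter['n'])  # 400000
```

Without the lock, the read-modify-write on the counter could interleave and lose updates; with it, the final count is exact.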

10. Queue

from multiprocessing import Queue

q = Queue(5)
q.put()
q.put_nowait()
q.get()
q.get_nowait()

Stack: LIFO

10.1 IPC inter-process communication

Data is isolated between processes. A Queue allows inter-process communication: one process puts data into the queue and another process gets it from the queue, achieving data interaction between processes.

10.2 Producer and Consumer Model

Producer: produces data. Consumer: uses data.

To keep the two balanced, producers put data into the queue and consumers take data out of the queue.
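A minimal producer-consumer sketch using a bounded thread queue (queue size and item counts are arbitrary choices):

```python
import queue
import threading

q = queue.Queue(5)              # bounded queue balances the two sides
used = []

def producer():
    for i in range(10):
        q.put(i)                # blocks when the queue is full

def consumer():
    for _ in range(10):
        item = q.get()          # blocks when the queue is empty
        used.append(item)
        q.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(used)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The bounded capacity is what enforces the balance: a fast producer blocks on put until the consumer catches up.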


11.1 What is a thread

A process is the unit of resources; a thread is the unit of execution.

When a process is created it comes with one thread (the main thread).

11.2 Advantages and disadvantages of processes and threads

Process: advantages: compute-intensive programs can use multiple cores. Disadvantages: higher resource overhead than threads.

Thread: advantages: small resource consumption; improves efficiency for IO-intensive programs. Disadvantages: cannot take advantage of multiple cores.

Data is shared between threads of the same process.

12. Global Interpreter Lock

In CPython, the global interpreter lock (GIL) is a mutex that prevents multiple threads within one process from executing in parallel at the same time. The lock is necessary mainly because CPython's memory management is not thread-safe; the GIL exists to keep the interpreter thread-safe.

### 13. Deadlock phenomenon

The so-called deadlock: refers to the phenomenon of two or more processes or threads waiting for each other due to resource contention during the execution process. If there is no external force, they will not be able to advance. At this time, the system is said to be in a deadlock state or the system has a deadlock. These processes that are always waiting for each other are called deadlock processes.

14. Recursive lock

Solves the deadlock phenomenon: mutexA = mutexB = RLock(). The same thread can acquire the lock repeatedly; only when the lock's internal count drops back to zero can the next thread acquire it.

15. Semaphore

Also a lock, but one that can be held by several users at the same time: sm = Semaphore(5)

16. Thread queues: ensure data safety between threads

import queue

FIFO (first-in, first-out) queue: queue.Queue()
LIFO (last-in, first-out) queue: queue.LifoQueue()
Priority queue: queue.PriorityQueue()
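The three queue types side by side (the sample values are arbitrary):

```python
import queue

fifo = queue.Queue()
fifo.put(1); fifo.put(2)
first = fifo.get()              # 1: first in, first out

lifo = queue.LifoQueue()
lifo.put(1); lifo.put(2)
top = lifo.get()                # 2: last in, first out

pq = queue.PriorityQueue()
pq.put((3, 'low'))
pq.put((1, 'high'))             # smaller number = higher priority
urgent = pq.get()               # (1, 'high') comes out first
print(first, top, urgent)
```

All three share the same put/get interface; only the ordering of get differs.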

17. Event

Can control the execution of threads: some threads can control when other threads run.
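A minimal Event sketch: one thread waits until another releases it (the busy-wait loop is only there to make the ordering deterministic):

```python
import threading

event = threading.Event()
order = []

def waiter():
    order.append('waiting')
    event.wait()                # blocks until another thread calls set()
    order.append('released')

t = threading.Thread(target=waiter)
t.start()
while 'waiting' not in order:   # make sure the waiter got there first
    pass
event.set()                     # let the waiting thread continue
t.join()
print(order)  # ['waiting', 'released']
```

This is the pattern behind, for example, a "green light" thread releasing many waiting "car" threads at once.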

To control the number of processes and threads created, the process pool and thread pool keep the hardware running normally.

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

pool1 = ProcessPoolExecutor()  # defaults to the number of CPUs
pool2 = ThreadPoolExecutor()  # defaults to the number of CPUs * 5

18. Callback function

pool.submit(function_name, args).add_done_callback(callback_function_name)

Note: the callback function must accept one parameter, which is the future object of the finished task; its result() method returns the first function's return value.

res1 = res.result()  # get the return value
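A sketch of submit plus add_done_callback (the task, callback, and pool size are my own illustrative choices):

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def task(x):
    return x * 2

def callback(future):                 # receives the future of the finished task
    results.append(future.result())   # result() is the task's return value

pool = ThreadPoolExecutor(4)
for i in range(3):
    pool.submit(task, i).add_done_callback(callback)
pool.shutdown()                       # wait for all submitted tasks to finish
print(sorted(results))  # [0, 2, 4]
```

Because callbacks may run in any order, the results are sorted before printing.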

19. Coroutine

Coroutine: achieves concurrency within a single thread. It is not an OS-level unit; it is just a concept invented by programmers.

Advantages of coroutines: save memory resources and further improve CPU utilization (they only have an advantage in IO-intensive programs).

High concurrency: multi-process + multi-thread + coroutine

19.1 Manual creation of the coroutine:

Manually realize the switch + save state:

yield + next: yield can suspend a program and save its state, but a suspension caused by yield will not be recognized by the operating system as an IO operation.
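A sketch of manual switching + saving state with yield and next (two toy tasks handing control back and forth; the trace list is only there to show the interleaving):

```python
trace = []

def task1():
    for i in range(3):
        trace.append(('task1', i))
        yield                    # suspend here, saving the loop's state

def task2(g):
    for i in range(3):
        trace.append(('task2', i))
        next(g)                  # switch back into task1 where it left off

g = task1()
next(g)                          # start task1; it runs until its first yield
try:
    task2(g)
except StopIteration:            # raised when task1's loop is exhausted
    pass
print(trace)
```

The trace alternates between the two tasks, which is exactly switching + saving state, but since nothing here is real IO the operating system gains no efficiency from it.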

19.2 Using gevent to achieve concurrency in a single thread

from gevent import monkey

monkey.patch_all()  # monitors all subsequent code for IO operations

from gevent import spawn, joinall  # implements switching + saving state

s1 = spawn(task1)

s2 = spawn(task2)

joinall([s1, s2])  # waits for all tasks to finish, then executes the code below

20. IO models (understand)

Blocking IO

Non-blocking IO

Multiplexed IO

Asynchronous IO

Reference: Network Programming and Concurrent Programming Summary, Tencent Cloud Community