C/S architecture (Client/Server):
Client: client software installed on the user's machine
Server: server side
Advantages: uses less network bandwidth, and the software runs more stably.
Disadvantages: whenever the server is updated, the client must be updated too; accessing several different servers means downloading the corresponding software for each one, which takes up a lot of the client machine's hardware resources.
B/S architecture (Browser/Server):
Browser: the browser acts as the client
Server: server side, providing 24-hour uninterrupted service
Client: to access different servers you only need to type different URLs into the browser, so little of the client's hardware is used; however it consumes more network bandwidth and becomes unstable when the network is slow.
- Physical connection layer
Transmits binary data in the form of electrical signals.
- Data link layer
1) Specifies how the electrical signals are grouped
2) Requires a network card:
- MAC address:
a unique 12-digit hexadecimal string: the first six digits are the manufacturer number, the last six are the serial number
- Ethernet protocol:
communication within the same local area network:
Unicast: one-to-one "shout"
Broadcast: "shouting" to everyone on the LAN (can cause broadcast storms)
It cannot communicate across LANs.
- Network layer
IP: locates which local area network a host is on
Port: uniquely identifies an application on a computer
ARP protocol: resolves an IP address into the corresponding MAC address
- Transport layer (TCP, UDP). Features of TCP: it is called a stream protocol; before communicating, a connection must be established.
#### 1.1 Three-way handshake of TCP protocol:
The client sends a connection request to the server; the server acknowledges that request and sends its own connection request back to the client; the client acknowledges the server's request, and the two-way channel is established.
#### 1.2 Four-way wave of TCP protocol:
The server sends a disconnect request to the client; the client acknowledges it and then checks whether it still has data to send to the server. If not, the client sends its own disconnect request to the server, the server acknowledges it, and the client and server are fully disconnected.
Feedback mechanism of the two-way channel: when one side sends data, the other side replies with an acknowledgement that it was received. If no acknowledgement comes back, the sender resends the data at intervals; if after too long there is still no reply, it stops sending.
The difference between TCP and UDP:
TCP: like making a phone call (a connection must be established before talking)
UDP: like sending a text message (no connection, just send it out)
- Application layer
ftp
http
https (http + ssl)
The socket module is used to write socket clients and servers; internally it encapsulates for us the work of the lower layers of the seven-layer protocol stack.
import socket
server = socket.socket()
server.bind(('127.0.0.1', 6666))  # the ip and port are passed as a tuple
server.listen(6)                  # size of the semi-connection (backlog) pool
conn, addr = server.accept()
data = conn.recv(1024)            # maximum number of bytes received at once
conn.send('Message sent'.encode('utf-8'))  # send() takes bytes, so encode the string
import socket
client = socket.socket()
client.connect(('127.0.0.1', 6666))   # same ip and port the server bound to
client.send('hello'.encode('utf-8'))  # send() takes bytes
data = client.recv(1024)              # receive at most 1024 bytes back
Function: create a pipe to the system shell (cmd) from code, send it commands, and receive the results that cmd returns.
Usage:
import subprocess
obj = subprocess.Popen('cmd command', shell=True,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
success = obj.stdout.read()   # normal output of the command (bytes)
error = obj.stderr.read()     # error output of the command (bytes)
msg = success + error
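A sketch of how this is typically combined with the socket server above (the pairing is implied by the sticky-packet discussion that follows; the loop structure is my own assumption, and the address reuses 127.0.0.1:6666 from the earlier snippet): the server executes whatever command the client sends and returns the output.

```python
import socket
import subprocess

# sketch: combines the socket server and subprocess snippets above
server = socket.socket()
server.bind(('127.0.0.1', 6666))
server.listen(5)
conn, addr = server.accept()

while True:
    cmd = conn.recv(1024).decode('utf-8')   # command text sent by the client
    if not cmd:                             # empty bytes: the client closed the connection
        break
    obj = subprocess.Popen(cmd, shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
    msg = obj.stdout.read() + obj.stderr.read()   # both are bytes
    conn.send(msg)   # the client cannot know how big msg is -> the sticky-packet problem below
```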
1) The receiver has no way to know in advance how large the data sent by the other side is.
2) When the interval between sends is short and each piece of data is small, TCP packs them together by default, so data sent in several send() calls may arrive as one chunk (the sticky-packet problem).
Primary version:
Pack the length of the data into a fixed-length header: struct.pack('i', len(data))
When the other side receives the header, it unpacks the length from it:
data_len = struct.unpack('i', header)[0]
Note: the data must be unpacked with the same format it was packed with.
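A minimal sketch of the fixed-length-header approach, written as two helpers (the function names are my own, not from the notes; sock is any connected TCP socket from the snippets above):

```python
import struct


def send_with_header(sock, data: bytes):
    """Illustrative helper: send a fixed 4-byte header holding the payload length, then the payload."""
    sock.send(struct.pack('i', len(data)))
    sock.send(data)


def recv_with_header(sock) -> bytes:
    """Illustrative helper: read the 4-byte header, unpack the length, then receive exactly that many bytes."""
    header = sock.recv(4)
    data_len = struct.unpack('i', header)[0]
    received = b''
    while len(received) < data_len:   # keep receiving until the full payload has arrived
        received += sock.recv(1024)
    return received
```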
Upgraded version:
First put the information into a dictionary, then serialize the dictionary and send it as the header before the real data. Benefits: the header can carry the real data length plus descriptive information about the file, and the fixed-length part that gets sent stays small.
dic = {
    'data_len': 10000,
    # descriptive information about the file (e.g. file name)
}
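A sketch of the upgraded protocol, assuming the dictionary is serialized with json (the helper names and the file_name field are my own illustration):

```python
import json
import struct


def send_with_dict_header(sock, data: bytes, file_name: str = ''):
    """Illustrative helper: send the json header's length (4 bytes), the json header, then the payload."""
    dic = {'data_len': len(data), 'file_name': file_name}
    header = json.dumps(dic).encode('utf-8')
    sock.send(struct.pack('i', len(header)))   # fixed-length part: size of the json header
    sock.send(header)                          # the dictionary header itself
    sock.send(data)                            # the real data


def recv_with_dict_header(sock):
    """Illustrative helper: reverse the steps: header length -> json dictionary -> exactly data_len bytes."""
    header_len = struct.unpack('i', sock.recv(4))[0]
    dic = json.loads(sock.recv(header_len).decode('utf-8'))
    received = b''
    while len(received) < dic['data_len']:
        received += sock.recv(1024)
    return dic, received
```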
Client:
dic = {
    # file size
    # file name
}
with open(file_name, 'rb') as f:
    for line in f:
        client.send(line)
Server:
dic = {
    # file size
    # file name
}
init_recv = 0
with open(file_name, 'wb') as f:
    while init_recv < file_size:
        data = conn.recv(1024)
        f.write(data)
        init_recv += len(data)
Can support concurrency.
import socketserver
Define a class:
TCP: it must inherit from the BaseRequestHandler class
class MyTcpServer(socketserver.BaseRequestHandler):
- handle
Internally, socketserver has already implemented roughly:
server = socket.socket()
server.bind(('127.0.0.1', 6666))
server.listen(5)
while True:
    conn, addr = server.accept()
    print(addr)
The parent class's handle method must be overridden; it is called each time a client connects:
def handle(self):
    print(self.client_address)
    while True:
        data = self.request.recv(1024)   # self.request is the connection object (like conn)
        self.request.send(data)
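Putting the pieces together, a minimal concurrent echo-server sketch: ThreadingTCPServer runs handle for every connected client in its own thread (the port simply reuses 6666 from the snippets above).

```python
import socketserver


class MyTcpServer(socketserver.BaseRequestHandler):
    def handle(self):
        # called once per connected client, in its own thread
        print(self.client_address)
        while True:
            data = self.request.recv(1024)
            if not data:                 # client closed the connection
                break
            self.request.send(data)      # echo the data back


if __name__ == '__main__':
    server = socketserver.ThreadingTCPServer(('127.0.0.1', 6666), MyTcpServer)
    server.serve_forever()
```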
TCP:
SOCK_STREAM
conn.recv
UDP:
SOCK_DGRAM
server.recvfrom()
Server:
import socket
server = socket.socket(type=socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 6666))
data, addr = server.recvfrom(1024)   # returns the data and the sender's address
server.sendto(data, addr)            # reply to the address the data came from
Client:
import socket
client = socket.socket(type=socket.SOCK_DGRAM)
ip_port = ('127.0.0.1', 6666)
client.sendto(b'hello', ip_port)     # sendto() takes bytes plus the target address
data, _ = client.recvfrom(1024)
print(data)
Multi-channel (multiprogramming) technology
Multi-channel: switching + saving state
- Reuse in space: lets multiple programs share the hardware
- Reuse in time: a program is switched away when it hits an IO operation, or when it has occupied the CPU for too long.
Concurrency: looks like running at the same time (achieved with multi-channel technology).
Parallelism: running at the same time in the true sense (processes on multiple cores).
A process is a resource unit: every process created gets its own namespace and occupies memory resources.
A program is a pile of code; a process is that code in the middle of being run.
Round-robin time slicing: with 10 processes, a fixed period of time is divided into 10 equal slices and one is allocated to each process.
Multi-level feedback queue: short tasks get the highest execution priority and sit in the first-level queue; the longer a task runs, the lower the queue level it is moved to (level 1, level 2, level 3, ...).
Process states:
Ready state: processes that have been created and are queued up waiting to run.
Running state: the process is running on the CPU; from here it either finishes, blocks on IO, or is switched back to ready after holding the CPU too long.
Blocked state: the process has entered an IO operation; when the IO finishes it goes back to the ready state.
Synchronous and asynchronous describe the way tasks are submitted. Synchronous submission is serial: one task can only be submitted for execution after the previous one has finished. Asynchronous submission lets multiple tasks run concurrently.
Blocking: the blocked state. Non-blocking: the ready and running states.
Synchronous/asynchronous and blocking/non-blocking are different concepts and must not be confused: waiting is not necessarily blocking, because a task may simply be spending a long time on the CPU.
To maximize CPU usage: reduce unnecessary IO operations as much as possible.
### 6. Two ways to create a process
1. Instantiate Process directly:
p = Process(target=task)
p.daemon = True   # must be set before start(), otherwise an error is raised
p.start()         # submits the request to create the process to the operating system
p.join()
2. Subclass Process and override run():
class MyProcess(Process):
    def run(self):
        # the code of the task goes here
        ...
p = MyProcess()
p.daemon = True   # must be set before start(), otherwise an error is raised
p.start()
p.join()
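A runnable sketch of both ways, assuming a simple task function of my own for illustration:

```python
import time
from multiprocessing import Process


def task(name):
    # illustrative task, not from the notes
    print(f'{name} is running')
    time.sleep(1)
    print(f'{name} is over')


class MyProcess(Process):
    def run(self):
        task(self.name)      # self.name is the auto-generated process name


if __name__ == '__main__':   # required on Windows when spawning processes
    p1 = Process(target=task, args=('p1',))   # way 1: pass the target function
    p2 = MyProcess()                          # way 2: subclass and override run()
    p1.start()
    p2.start()
    p1.join()                # wait for the children before the main process continues
    p2.join()
```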
Reclaiming a child process's resources: 1. calling join makes the main process wait until the child process has ended, so the child's resources can be reclaimed; 2. the main process ends normally.
### 8. Zombie process and orphan process
Zombie process: a child process that has ended but whose pid still exists because its parent process has not yet reclaimed it.
Orphan process: the main (parent) process has ended while the child process is still running.
Daemon process: as soon as the main process ends, every child process marked as a daemon is terminated as well.
Ensure data safety (mutex lock):
from multiprocessing import Lock
mutex = Lock()
mutex.acquire()   # grab the lock
mutex.release()   # release the lock
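A minimal sketch of several processes taking turns through the lock (the sleeping task and names are my own illustration; without the lock the prints would interleave):

```python
import time
from multiprocessing import Process, Lock


def task(name, mutex):
    # illustrative task, not from the notes
    mutex.acquire()            # only one process at a time enters this section
    print(f'{name} start')
    time.sleep(0.5)            # pretend to modify shared data here
    print(f'{name} end')
    mutex.release()


if __name__ == '__main__':
    mutex = Lock()
    ps = [Process(target=task, args=(f'process {i}', mutex)) for i in range(3)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
```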
from multiprocessing import Queue
q = Queue(5)        # the queue holds at most 5 items
q.put(item)         # blocks when the queue is full
q.put_nowait(item)  # raises queue.Full instead of blocking when full
q.get()             # blocks when the queue is empty
q.get_nowait()      # raises queue.Empty instead of blocking when empty
Queue: first in, first out (FIFO); stack: last in, first out (LIFO).
Data is isolated between processes; a Queue allows inter-process communication: one process puts data into the queue and another gets it out, so processes can exchange data.
Producer: produces data. Consumer: uses the data.
To keep them balanced, the producer puts the data it produces into a queue and the consumer takes the data out of the queue.
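A minimal producer/consumer sketch using a Queue; the item strings and the None sentinel used to tell the consumer to stop are my own choices:

```python
from multiprocessing import Process, Queue


def producer(q):
    for i in range(5):
        q.put(f'item {i}')     # produce data and put it into the queue
    q.put(None)                # sentinel (my own convention): nothing left to produce


def consumer(q):
    while True:
        item = q.get()         # take data out of the queue
        if item is None:       # sentinel received -> stop consuming
            break
        print('consumed', item)


if __name__ == '__main__':
    q = Queue()
    p = Process(target=producer, args=(q,))
    c = Process(target=consumer, args=(q,))
    p.start()
    c.start()
    p.join()
    c.join()
```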
A process is the unit of resource allocation; a thread is the unit of execution.
When a process is created, it comes with one thread of its own (the main thread).
Process: advantage: suits computation-intensive programs on multi-core machines. Disadvantage: creating one costs more resources than a thread.
Thread: advantage: small resource consumption, improves efficiency for IO-intensive programs. Disadvantage: cannot take advantage of multiple cores.
Data is shared between the threads of the same process.
In CPython, the global interpreter lock (GIL) is a mutex that prevents multiple threads in one process from executing at the same time (in parallel). The lock is necessary mainly because CPython's memory management is not thread-safe; the GIL exists to guarantee thread safety.
### 13. Deadlock phenomenon
Deadlock refers to the phenomenon where two or more processes or threads wait on each other because of competition for resources during execution; without outside intervention, none of them can make progress. The system is then said to be in a deadlock state, and the processes that keep waiting on each other are called deadlocked processes.
Solving the deadlock: use a recursive lock, mutex1 = mutex2 = RLock(). The lock keeps a count of how many times it has been acquired; only when the count drops back to zero can the next thread grab it.
Semaphore: also a kind of lock, but one that several users can hold at the same time: sm = Semaphore(5) allows up to five holders at once.
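A small sketch of the semaphore in use with threads (the sleeping task is my own illustration): at most five of the ten threads hold a slot at any moment.

```python
import time
from threading import Thread, Semaphore

sm = Semaphore(5)          # at most 5 threads may hold the "lock" at the same time


def task(name):
    # illustrative task, not from the notes
    with sm:               # acquire one of the 5 slots
        print(f'{name} got a slot')
        time.sleep(1)      # pretend to work while holding the slot


if __name__ == '__main__':
    threads = [Thread(target=task, args=(f'thread {i}',)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```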
import queue
FIFO (first-in, first-out) queue: queue.Queue()
LIFO (last-in, first-out) queue: queue.LifoQueue()
Priority queue: queue.PriorityQueue()
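A quick sketch showing the ordering difference between the three queue types (the sample values are my own):

```python
import queue

q = queue.Queue()            # FIFO: items come out in the order they went in
q.put(1)
q.put(2)
print(q.get())               # -> 1

lq = queue.LifoQueue()       # LIFO: the most recently put item comes out first
lq.put(1)
lq.put(2)
print(lq.get())              # -> 2

pq = queue.PriorityQueue()   # items come out in ascending priority order
pq.put((10, 'low priority'))
pq.put((1, 'high priority'))
print(pq.get())              # -> (1, 'high priority')
```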
Event object: can control the execution of threads, letting some threads control when other threads run.
Process pools and thread pools exist to limit the number of processes and threads that get created, so the hardware keeps running normally.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
pool1 = ProcessPoolExecutor()   # defaults to the number of CPUs
pool2 = ThreadPoolExecutor()    # defaults to the number of CPUs * 5
pool.submit(function_name, arguments).add_done_callback(callback_function_name)
Note: the callback function must accept one parameter; that parameter is the future object of the submitted function.
res1 = res.result()   # inside the callback, .result() gets the submitted function's return value
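A runnable sketch of submit plus a callback (the task and callback functions are my own illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(5)        # at most 5 threads in the pool


def task(n):
    # illustrative task, not from the notes
    time.sleep(0.5)
    return n * 2


def call_back(future):
    # the callback receives the future object of the finished task
    print('result:', future.result())


if __name__ == '__main__':
    for i in range(10):
        pool.submit(task, i).add_done_callback(call_back)
    pool.shutdown()                 # wait for all submitted tasks to finish
```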
Coroutine: achieves concurrency within a single thread. It is not a real unit known to the operating system; it is a name programmers made up.
Advantages of coroutines: they save memory resources and further improve CPU utilization (they only bring an advantage in IO-intensive programs).
High concurrency: multiple processes + multiple threads + coroutines.
Manually implementing switch + save state:
yield + next: but a program suspended by yield is not recognized by the operating system as an IO operation (a small sketch follows below).
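A tiny sketch of switch + save state with yield/next (the two task functions are my own illustration): each next() call jumps back into task1 exactly where it paused.

```python
def task1():
    while True:
        print('task1')
        yield              # pause here and remember the position (save state)


def task2(gen):
    for _ in range(3):
        print('task2')
        next(gen)          # switch back into task1 right where it left off


g = task1()
next(g)                    # run task1 up to its first yield
task2(g)
```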
from gevent import monkey
monkey.patch_all()    # patches blocking calls so gevent can detect IO operations
from gevent import spawn, joinall   # spawn/joinall implement switch + save state
s1 = spawn(task1)     # gevent monitors the spawned task for IO operations
s2 = spawn(task2)
joinall([s1, s2])     # wait for all the spawned tasks to finish before moving on
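A runnable sketch with two IO-bound tasks of my own that just sleep: because monkey.patch_all() makes time.sleep count as IO, gevent switches between the tasks and the total time is roughly the longer sleep rather than the sum.

```python
from gevent import monkey
monkey.patch_all()        # must run first so that blocking calls get patched

import time
from gevent import spawn, joinall


def task1():
    # illustrative task, not from the notes
    print('task1 start')
    time.sleep(2)         # treated as IO after patching, so gevent switches away
    print('task1 done')


def task2():
    print('task2 start')
    time.sleep(3)
    print('task2 done')


start = time.time()
joinall([spawn(task1), spawn(task2)])
print('total:', time.time() - start)   # roughly 3 seconds, not 5
```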
Blocking IO
Non-blocking IO
Multiplexed IO
Asynchronous IO