Distributed splicing processor selection tips

Updated: 2019-03-15 11:20

The large-screen splicing industry can be described as a hundred flowers blooming. Since distributed splicing systems spread through the industry around 2014, manufacturers have spared no effort to introduce all kinds of distributed splicing processors. Some vendors new to distributed technology, lacking a solid understanding of what a distributed splicing processor is or of its technical requirements, divide the products into "pure hardware," "embedded," "PC architecture," and so on. In fact, such a classification only misleads customers. Genuine distributed technology must be built on a node-based, independent processing architecture: it imposes no distance limitation on the application and does not depend on a central system. In a system built from a distributed splicing processor, hardware nodes handle encoding and decoding, while an embedded server controls the whole system; the embedded server's main job is to keep the control layer virus-free, secure, and stable. The so-called PC architecture is really an extension of the traditional centralized approach, not a distributed splicing system in any fundamental sense, so it is not a kind of distributed splicing processor at all.

    Faced with the constant stream of manufacturers offering distributed splicing processors, how should users choose without falling for each vendor's sales rhetoric? Here are a few guidelines for reference.

   First: product technology maturity

  

Distributed splicing processors have only been commercially available for a few short years, and such a product matures only after a long period of research, development, and technical accumulation. The technical threshold is high: the required algorithms and coding techniques are prohibitive for many manufacturers.

     For example, the compression and codec technology that distributed splicing processors use for video transmission has gone through three stages:

    In the first stage, limited by the coding technology of the time, video was transmitted as a completely uncompressed raw stream. This method requires a great deal of bandwidth: a single 1080p@60 stream needs roughly 3 Gbps.
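    As a rough check on that figure, here is a minimal back-of-the-envelope calculation, assuming 8-bit RGB (24 bits per pixel) and ignoring blanking intervals:

    # Back-of-the-envelope check of the ~3 Gbps figure for an uncompressed
    # 1080p@60 stream, assuming 24 bits per pixel and no blanking.
    width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

    raw_bps = width * height * fps * bits_per_pixel
    print(f"Uncompressed 1080p@60: {raw_bps / 1e9:.2f} Gbps")  # ~2.99 Gbps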

    In the second stage, as application requirements changed, the uncompressed-stream mode could no longer support large numbers of video feeds. Hybrid-stream compression was born, built on conventional codecs (such as MPEG-2 and MPEG-4) combined with each vendor's own algorithms. A single 1080p@60 stream compressed this way needs only a few hundred megabits of bandwidth, which was enough to meet the system-expansion needs of that period.

    In the third stage, with the rise of big data and growing social demands, massive signal access over small bandwidth became more and more urgent. The H.264 international standard compression algorithm is the best answer so far to the small-bandwidth problem: it brings a single stream down to tens of megabits or even a few megabits, supports massive signal expansion of the system, and still preserves the original image quality in high definition. These advantages have made H.264 decoder chips widely used in satellite HD set-top boxes (for example in the United States and Europe), where they have become standard in HD set-top box SoC chips.
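    To make the difference between the three stages concrete, the sketch below estimates how many 1080p@60 streams a single gigabit link could carry at each stage; the per-stream bitrates are illustrative assumptions taken from the figures above, not vendor measurements.

    # Rough comparison of streams per 1 Gbps link at each stage's per-stream bitrate.
    LINK_GBPS = 1.0

    stages = {
        "Stage 1: uncompressed raw stream": 3000.0,    # ~3 Gbps per stream, in Mbps
        "Stage 2: hybrid MPEG-2/MPEG-4 stream": 300.0, # a few hundred Mbps (assumed)
        "Stage 3: H.264 stream": 8.0,                  # a few Mbps (assumed)
    }

    for name, mbps in stages.items():
        streams = int(LINK_GBPS * 1000 // mbps)
        print(f"{name}: {mbps:g} Mbps/stream -> {streams} stream(s) per 1 Gbps link")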

    Second: the stability of the system

    

Undoubtedly, system stability is one of the core criteria for judging the quality of a processor. Stability can be assessed in many ways, but it usually includes three aspects:

    First, synchronization. A large-screen splicing processor mainly processes incoming signals and puts them on the wall, so the synchronization requirements for on-screen signals are extremely strict: the outputs driving adjacent screens must stay frame-locked, otherwise the image spanning them may tear, which seriously affects the viewing experience.
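    One common way to meet this requirement is a barrier on the frame index: outputs are flipped only once every node has the same frame ready. The sketch below is a simplified illustration of that idea; the node names and the callback are hypothetical, not any vendor's API.

    # Minimal sketch of frame-locked display across decoding nodes: a frame is
    # shown on the wall only once every node has delivered its slice of it.
    from collections import defaultdict

    NODES = {"node-A", "node-B", "node-C", "node-D"}   # one node per screen/slice
    arrived = defaultdict(set)                          # frame index -> nodes ready

    def on_slice_ready(frame_idx: int, node_id: str) -> None:
        """Called when a node has finished decoding its slice of a frame."""
        arrived[frame_idx].add(node_id)
        if arrived[frame_idx] == NODES:
            print(f"frame {frame_idx}: all nodes ready, flip outputs together")
            del arrived[frame_idx]

    # Simulate slices arriving out of order for two consecutive frames.
    for node in ("node-B", "node-A", "node-D", "node-C"):
        on_slice_ready(1, node)
    for node in NODES:
        on_slice_ready(2, node)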

    Second, real-time performance. The core application of a large-screen splicing system is to transmit video signals in real time and display them on the wall immediately, providing a timely, fast, and effective basis for command and decision-making. This is critical for responding to major incidents in time to limit losses: when disasters such as earthquakes or landslides strike, the monitoring side must be able to grasp the situation and deploy relief based on a live view of the scene from the first moment. Time here means lives and property.

    Third, the system must not collapse as a whole. For a large-screen system, having the entire system stop working because a few modules fail is a catastrophe for the user. A true distributed system processes signals on distributed splicing nodes: each node handles only its own signals and has no effect on the others. When a single node fails, the rest of the system is unaffected and keeps running normally; until the damaged node is replaced, its input signals can be freely rescheduled to and processed by the other nodes.
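    A minimal sketch of that failover behaviour, assuming a simple least-loaded rescheduling rule; the node names, signal names, and the rule itself are hypothetical.

    # Inputs assigned to a failed node are rescheduled onto the surviving nodes,
    # so the wall keeps running while the damaged node awaits replacement.
    nodes = {
        "node-1": ["camera-01", "camera-02"],
        "node-2": ["camera-03"],
        "node-3": ["camera-04", "camera-05"],
    }

    def handle_node_failure(failed: str) -> None:
        orphaned = nodes.pop(failed, [])
        for signal in orphaned:
            # naive rule: give each orphaned signal to the least-loaded healthy node
            target = min(nodes, key=lambda n: len(nodes[n]))
            nodes[target].append(signal)
            print(f"{signal}: rescheduled from {failed} to {target}")

    handle_node_failure("node-3")
    print(nodes)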

  Third: system compatibility

    

Devices connected to a large-screen splicing system vary widely, so a good distributed splicing processor must follow an open design principle. It should support the access of almost every type of signal and also interface with other business platforms; otherwise compatibility problems can easily bring the system down, and the advantages of a distributed splicing processor over a traditional one are lost.

    For example, in a large-screen splicing system the most numerous access devices are cameras, sometimes thousands of them. In many existing systems, because signals have been added and equipment replaced at different times, the cameras span many brands. The proprietary protocols of the various camera manufacturers make the devices incompatible with one another and undermine the stability of the whole system.

    Another example: system upgrades often require connecting to external platforms, which demands excellent compatibility from the large-screen splicing system; otherwise it is difficult to keep the whole system running stably. So when choosing a distributed splicing processor, customers should first find out whether it is genuinely open, whether it can interface with different platforms, and whether it is compatible with different devices. These are the criteria to measure against.
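    One way such openness is commonly achieved is a driver layer that hides each brand's proprietary protocol behind one common interface, in the spirit of the sketch below; the brands, URL schemes, and method names are invented for illustration.

    # Sketch of a protocol-abstraction layer: the splicing system only talks to
    # the common CameraDriver interface, never to a brand's private protocol.
    from abc import ABC, abstractmethod

    class CameraDriver(ABC):
        @abstractmethod
        def open_stream(self, address: str) -> str:
            """Return a handle/URL for the live stream at the given address."""

    class BrandADriver(CameraDriver):
        def open_stream(self, address: str) -> str:
            return f"rtsp://{address}/brand_a/main"      # brand A's private scheme

    class BrandBDriver(CameraDriver):
        def open_stream(self, address: str) -> str:
            return f"rtsp://{address}:8554/live/ch0"     # brand B's private scheme

    DRIVERS = {"brand_a": BrandADriver(), "brand_b": BrandBDriver()}

    def open_camera(brand: str, address: str) -> str:
        return DRIVERS[brand].open_stream(address)

    print(open_camera("brand_a", "10.0.0.11"))
    print(open_camera("brand_b", "10.0.0.12"))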

   Fourth: the ease of use of the system

    Ease of use usually means how convenient the system is to operate, so that users can work with confidence. As technology advances and demands grow, however, simple operation is no longer the only criterion: visual, WYSIWYG operation has become an important part of a good operating experience. In the Internet+ era, using Internet technology to link the operating client with the display terminals, so that the operator gets visual, linked control and a real-time preview echo of every access signal, should be standard in an advanced distributed splicing processor.
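    As a rough illustration of what "preview echo" can mean in practice, the sketch below has a node push a small thumbnail of its live signal to the operator client so the client UI can show what is actually on the wall; the message format and names are assumptions, not any vendor's protocol.

    # Minimal sketch of a preview echo message from a node to the operator client.
    import base64, json, time

    def make_thumbnail(signal_id: str) -> bytes:
        # Placeholder: a real node would grab and downscale the current frame here.
        return f"jpeg-bytes-of-{signal_id}".encode()

    def push_preview(signal_id: str) -> str:
        message = {
            "signal": signal_id,
            "timestamp": time.time(),
            "thumbnail": base64.b64encode(make_thumbnail(signal_id)).decode(),
        }
        return json.dumps(message)   # a real system would send this to the client UI

    print(push_preview("camera-01"))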

    Fifth: the scalability of the system

    

For a highly platform-oriented large-screen splicing system, scalability must be considered. In practice it is often overlooked that, as the business grows and applications change, the system has to keep expanding to meet new demands. All too often, because the original project was poorly planned, the existing system cannot meet the new requirements and has to be scrapped and rebuilt. That waste of manpower and money is unacceptable to users.

     With a true distributed splicing processor, system scalability is effectively unlimited; there is no fixed rule that only so many input signals are allowed for so many outputs. When the system needs to grow, adding a small number of nodes completes the expansion: no original equipment has to be scrapped, and no complicated construction is needed. Everything stays simple, as the sketch below illustrates.
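    A toy illustration of that expansion model; the per-node capacities are chosen purely for illustration.

    # Node-based expansion: capacity grows simply by registering more nodes,
    # with no change to the nodes already running.
    INPUTS_PER_ENCODER, OUTPUTS_PER_DECODER = 4, 2

    cluster = {"encoders": 8, "decoders": 6}   # the existing system

    def capacity(c: dict) -> str:
        return (f"{c['encoders'] * INPUTS_PER_ENCODER} inputs, "
                f"{c['decoders'] * OUTPUTS_PER_DECODER} outputs")

    print("before expansion:", capacity(cluster))

    # Expansion: add a handful of nodes, nothing else changes.
    cluster["encoders"] += 4
    cluster["decoders"] += 2
    print("after expansion: ", capacity(cluster))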


    Sixth: system security


    In the era of big data, information security and confidentiality have become a society-wide consensus, and this applies in particular to large-screen splicing systems serving enterprises, government, the military, and other functional departments. When selecting a distributed splicing processor, therefore, make sure the product is built on the vendor's own core technology and on secure underlying technology, which fundamentally safeguards the user's core interests.