The king of cunning: Mourinho will do whatever it takes to gain an edge.

Mourinho's career coaching Europe's top clubs is full of memorable moments, but one of his most notorious nights came in the 2010/11 Champions League, when two suspicious red cards appeared to work in Real Madrid's favor ahead of the knockout rounds. With qualification from Group G already secured, Mourinho decided to act in the final group game against Ajax. Playing the master schemer, he sent clear instructions to Ramos and Alonso to deliberately pick up bookings and get themselves sent off.

With Ramos and Alonso sent off, each would serve a one-game suspension immediately and enter the all-important knockout stage with a clean slate. Sure enough, Alonso carried out the task efficiently, practically applying for a red card, and Ramos soon collected a second yellow. Referee Craig Thomson duly sent both players off, after which Ramos walked up and shook the referee's hand, making it all the more obvious that he was following Mourinho's instructions.

Although Mourinho's gamble achieved the intended result, it came at a cost to Real Madrid. UEFA handed down heavy punishments: Mourinho was suspended for two games and fined 40,000 euros, and the club was fined a further 120,000 euros. The self-proclaimed "Special One" had made UEFA furious. And for all his cunning, Mourinho still suffered at the hands of his arch-rival Barcelona, not once but twice that season. First, Barcelona outplayed Real Madrid over the two legs of the Champions League semi-final and knocked them out.

Under Guardiola, Barcelona also won the La Liga title comfortably, by a four-point margin. Mourinho finally left Real Madrid in 2013 and returned to Chelsea for a second spell. After winning another Premier League title at Stamford Bridge, his stock and reputation gradually declined, and before he took charge of Roma in 2021, his records at Manchester United and Tottenham Hotspur were mixed at best.

Beta's take: while coaching Real Madrid, Mourinho would do whatever it took to gain an edge. He is not only the Special One but a veritable king of cunning, and his personal charisma attracts countless fans. Here's hoping he goes even further at Roma!

So, fans, what do you make of this story? If you enjoyed this article, follow Beta to chat about the stars and the beautiful game.

"Have dinner with me and I'll give you $300,000!" How a football superstar fell into a Chinese businesswoman's trap.

"Have dinner with me and I'll give you $300,000." In 2003, a female fan in China paid a small fortune to invite Ronaldo to dinner. Ronaldo happily accepted, not knowing he was walking into a trap.

Even people who know nothing about football have heard of Ronaldo. His full name is Ronaldo Luís Nazário de Lima, and his achievements on the pitch could be recounted endlessly. He also has many nicknames, such as Ronnie, "Fei Luo" and "Da Luo". Born on September 18, 1976, into a very poor family, it was his love of football that kept him going, and he eventually grew into one of the kings of world football.

Over the course of his career, Ronaldo dealt with all kinds of people. Given his fame, he was usually careful to protect his image and brand at public events. What he never expected was to be outsmarted by a Chinese businesswoman.

The story goes back some twenty years, to 2003, when Ronaldo came to China for an exhibition match. On arrival he was quickly struck by the enthusiasm of Chinese fans: the stadium was a sea of supporters, all chanting his name.

After the match, Ronaldo's assistant told him that a Chinese businesswoman wanted to have dinner with him. At first Ronaldo refused; his time was precious. But then the assistant added that she was willing to pay $300,000 if he would simply agree to the meal.

Hearing this, Ronaldo was instantly interested. $300,000 just for eating a meal: why not go?

Before setting out, Ronaldo specifically instructed his assistant to make clear to the other party that he was only coming to dinner. Anything involving an endorsement would have to be negotiated separately; after all, a product endorsement was worth far more than $300,000.

At the venue, the businesswoman confirmed that the $300,000 covered only the meal and involved no endorsement, so Ronaldo let his guard down. Besides her, many children were lined up to present flowers to him.

The businesswoman then produced a jersey, saying she hoped Ronaldo would wear it while greeting the children, and explaining that the children had carefully designed it themselves. Ronaldo put on the special jersey, accepted the children's flowers, and even performed some skills for them on the spot.

Next came the meal itself. The whole dinner was very relaxed, and Ronaldo sensed nothing amiss. Afterwards, the businesswoman generously transferred the $300,000 on the spot.

At that point Ronaldo still thought it was easy money, and that Chinese businesswomen were remarkably generous.

After Ronaldo left China, training and matches drove the episode from his mind. It was only much later that he heard from friends that he was apparently endorsing a product in China, which puzzled him greatly, since he had endorsed nothing there.

Only after watching the advertisements did Ronaldo recall the dinner with the businesswoman. He then hired a lawyer and took her to court.

The businesswoman in this story was Jiang Peizhen, the head of Golden Voice. Jiang had initially run a candy factory, which soon went under due to poor management. Looking for a way out, she sought out Wang Yaofa, who was said to hold a valuable formula, and after obtaining his authorization she formally founded Guangxi Golden Voice.

Riding on Ronaldo's fame, Golden Voice quickly swept the country and its sales surged in a short time. Jiang Peizhen became an industry legend, but the luck did not last: in 2009 Golden Voice ran into financial trouble, and Jiang was eventually placed on a list of dishonest debtors.

The Digital Economy Empowering High-Quality Development: the 5th International Financial Technology Forum opens in Chengdu

Financial technology has become a new driver of the economy and one of the indicators used to measure a country's level of economic development. How fintech can empower high-quality economic growth has become a hot topic worldwide.

On November 5th, the 5th International Financial Technology Forum opened in Wenjiang District, Chengdu. More than 150 leading figures from political, industrial, academic and research circles around the world gathered once again in Chengdu to analyze the new directions, tracks, trends and paths of China's economic, financial and technological development around the theme of "the digital economy empowering high-quality development". Through more than ten events, the forum aims to inject financial momentum into high-quality economic and social development.

The Red Star News reporter learned that the two-day forum was sponsored by Southwestern University of Finance and Economics (SWUFE), the Chengdu Local Financial Supervision Administration and the Wenjiang District People's Government of Chengdu, and hosted by SWUFE's School of Finance, the China Institute of Finance, SWUFE's International Joint Laboratory of Financial Technology, and SWUFE's Sichuan Key Laboratory of Financial Intelligence and Financial Engineering.

Vision: financial technology helps build Digital China

Today, China's economic strength has achieved a historic leap, with total economic output ranking second in the world.

High-quality development has become the primary task in China's drive to build a modern socialist country in all respects, and the focus of economic development must remain on the real economy.

After several rounds of discussion, the participating experts agreed that financial technology, with its integrative, precise, interdisciplinary and open character, has become key to providing financial support, leveraging capital markets and technological innovation, effectively serving the real and digital economies, strengthening advanced manufacturing, achieving high-quality growth, and accelerating the construction of a "manufacturing power" and a "digital China".

2022 is a year of full-scale development for China's digital economy. During the 14th Five-Year Plan period, the digital economy has entered a new stage of deeper application, standardized development and universal sharing.

Experts at the meeting noted that, against this backdrop, holding the forum under the new development pattern of digital-economy empowerment is highly significant: global financial experts jointly review the course of fintech innovation, discuss its current state, and look ahead to how technology can help finance serve the real economy, prevent financial risks, and build a digital and green China, contributing to a broader and deeper pattern of opening up.

Four highlights: groundbreaking technology gives birth to new business formats

Carefully prepared and boldly innovative on the basis of previous editions, this year's forum presents four highlights.

The first highlight is the debut release of a series of pioneering fintech systems and platforms at the opening ceremony on November 5th, including the KubeAI engineering platform, the Quant Plus quantitative analysis platform, and an intelligent enterprise-risk identification and early-warning system. This is the first time SWUFE has demonstrated the technical "hard power" of a finance university to the industry, bringing new products, models and formats to the fintech sector and helping enterprises transform digitally.

The second highlight is the official launch of the 5th Chengdu "August 80" Global Fintech Product Design and R&D Competition, which again brings teams from eight of the world's top universities to Chengdu. The competition will further deepen industry-university-research cooperation, focus on cultivating "new finance" talent, and innovate and standardize both the content of the competition and the way talent is trained.

Also on the same day, at the Digital Economy Empowerment Fintech Innovation Forum, founders of fintech companies such as Bingjian Technology, DaoCloud and Kuanbang Technology held a dialogue on topics including how artificial intelligence advances enterprise credit evaluation and AI-empowered investment, exploring new directions for the development of the digital economy.

In addition, a presidents' forum will be held on November 6th, opened to the public for the first time rather than held behind closed doors. SWUFE will set up a platform inviting the presidents and deans of 16 mainstream Chinese universities to discuss and share new models, experiences and methods of talent cultivation.

Red Star News reporter Wu Huayu; material courtesy of Wenjiang District

Editor Chai Chang


What is "big data", and what does it mean?

"Big data" refers to data sets so large and varied that they cannot be captured, managed and processed with traditional database tools. The first characteristic is sheer volume: a "large" data set typically starts at around 10 TB, and in practice many enterprises combine multiple data sets, pushing volumes to the petabyte level. The second is variety: data comes from many sources in increasingly rich types and formats, breaking out of the previously defined structured category to include semi-structured and unstructured data. The third is velocity: data can be processed in near real time even at huge volumes. The last is veracity: as new sources such as social data, enterprise content, and transaction and application data break the limits of traditional sources, enterprises increasingly need reliable information and must ensure its authenticity and security.

Data collection: ETL tools extract data from distributed, heterogeneous sources, such as relational databases and flat files, into a temporary staging layer, where it is cleaned, transformed and integrated, and finally loaded into a data warehouse or data mart to serve as the basis for online analysis and data mining.
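To make the extract-transform-load steps above concrete, here is a minimal, self-contained Python sketch. The sources, field names and cleaning rules are all invented for illustration; a real pipeline would use dedicated ETL tooling.

```python
# Minimal ETL sketch (illustrative; all names are hypothetical).
# Extract rows from two heterogeneous sources, clean and transform
# them into one schema, then load them into a "data mart" (a list here).

import csv, io, json

csv_source = "id,name,amount\n1,Alice,10.5\n2,Bob,not_a_number\n"
json_source = '[{"id": 3, "name": "carol", "amount": 7}]'

def extract():
    rows = list(csv.DictReader(io.StringIO(csv_source)))
    rows += json.loads(json_source)
    return rows

def transform(rows):
    cleaned = []
    for r in rows:
        try:
            amount = float(r["amount"])          # cleaning: drop bad records
        except (TypeError, ValueError):
            continue
        cleaned.append({"id": int(r["id"]),
                        "name": str(r["name"]).title(),  # normalization
                        "amount": amount})
    return cleaned

data_mart = []
def load(rows):
    data_mart.extend(rows)

load(transform(extract()))
print(data_mart)
```

The bad record ("not_a_number") is dropped in the cleaning step, and the remaining rows land in the data mart in one consistent schema.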

Data access: relational databases, NoSQL stores, SQL engines, etc.

Infrastructure: Cloud storage, distributed file storage, etc.

Data processing: natural language processing (NLP) studies the language problems of human-computer interaction. The key is to make computers "understand" natural language, so the field is also called natural language understanding (NLU), or computational linguistics. It is both a branch of language information processing and one of the core topics of artificial intelligence.
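As a toy illustration of one basic NLP preprocessing step (assumed here, not taken from the article), the sketch below tokenizes two sentences and builds bag-of-words count vectors:

```python
# Toy illustration of an early NLP step: tokenizing text and building
# bag-of-words count vectors (a deliberately simplified sketch).

import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

docs = ["Computers must understand natural language.",
        "Natural language processing studies human-computer interaction."]

vocab = sorted(set(tok for d in docs for tok in tokenize(d)))
vectors = [Counter(tokenize(d)) for d in docs]

# "natural" appears once in each document
print([v["natural"] for v in vectors])
```

Real NLP systems add far more (stemming, embeddings, parsing), but counting tokens against a vocabulary is where classical text processing starts.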

Statistics: hypothesis testing, significance testing, analysis of variance, correlation analysis, t-tests, chi-square tests, partial correlation analysis, distance analysis, regression analysis (simple, multiple, stepwise, ridge and logistic regression), regression prediction and residual analysis, curve estimation, factor analysis, principal component analysis, and cluster analysis (including fast clustering methods)

Data mining: classification, estimation, prediction, affinity grouping (association rules), clustering, description and visualization, and mining of complex data types (text, Web, graphics, video, audio, etc.)
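Clustering, one of the mining tasks listed above, can be illustrated with a deliberately tiny one-dimensional k-means. This is a sketch of the idea, not production mining code:

```python
# Clustering sketched as a tiny 1-D k-means (illustrative only;
# real systems use optimized libraries).

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:                         # assignment step
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) if ps else c    # update step
                   for c, ps in clusters.items()]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
centers = kmeans_1d(data, centers=[0.0, 5.0])
print([round(c, 6) for c in centers])
```

Starting from arbitrary centers, the algorithm converges on the two obvious groups around 1 and 10.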

Prediction: predictive modeling, machine learning, modeling and simulation.

Presentation of results: cloud computing, tag clouds, diagrams, etc.

To understand the concept of big data, start with "big", which refers to data scale: big data generally means volumes above 10 TB (1 TB = 1024 GB). Big data differs from the "massive data" of the past, and its basic characteristics can be summarized as four V's (Volume, Variety, Value and Velocity): large volume, great diversity, low value density and high speed.

First, the volume is huge, ranging from the terabyte to the petabyte level.

Second, data types are diverse: web logs, videos, pictures, geolocation information, and so on.

Third, value density is low. Take video: during continuous surveillance, the useful data may amount to only a second or two.

Fourth, processing speed is fast, the so-called "one-second rule". This last point fundamentally distinguishes big data from traditional data mining. The Internet of Things, cloud computing, the mobile Internet, the Internet of Vehicles, phones, tablets, PCs, and sensors all over the globe are all sources or carriers of data.

Wonderful Classroom | makeU building-block robot programming officially starts at Shanghai Pudong Vanke Kindergarten.

On October 27th, 2022, the makeU course officially began at a private Vanke kindergarten in Pudong New Area, Shanghai, using a "building blocks + physical programming" model to bring children an unprecedented artificial intelligence experience.

In this first "building-block robot + physical programming" class, the children could not wait to open the makeU robot kits in their hands. Under the teacher's guidance, they assembled their ideal robot shapes step by step, gradually developing spatial construction skills through hands-on practice.

Large blocks are well suited to early-childhood teaching: through simple insertion and disassembly, children gradually master structural concepts such as interlocking and balance.

Through the teacher's lively and interesting explanations, the children gained a first understanding of the programmable block robots in their hands: controllers, motors, ultrasonic sensors... Through one interesting project after another, the robots gradually become the children's friends, and science and technology are no longer unfamiliar concepts and symbols, letting children grow up alongside artificial intelligence from an early age.

Once the children are familiar with the programming method, they begin a wonderful programming journey with the makeU reading pen in hand.

"Forward, backward, ultrasonic lights on!" Under a series of programming instructions, the robot in each child's hands instantly "comes alive", accurately executing actions according to the logical instructions the child has assembled. In this way, the seeds of science and technology are planted in children's hearts, waiting to be watered with scientific enthusiasm.
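Purely as an illustration of executing a sequence of block instructions (this is not the actual makeU API; the command names and state fields are invented), a toy interpreter for such a program might look like:

```python
# A purely illustrative sketch (not the actual makeU API) of how a
# block/voice instruction sequence might drive a toy robot's state.

def run_program(commands):
    state = {"position": 0, "light": "off"}
    actions = {
        "forward":  lambda s: s.update(position=s["position"] + 1),
        "backward": lambda s: s.update(position=s["position"] - 1),
        "light on": lambda s: s.update(light="on"),
    }
    for cmd in commands:             # execute instructions in order
        actions[cmd](state)
    return state

print(run_program(["forward", "forward", "backward", "light on"]))
```

The point of block-based programming is exactly this: a linear sequence of simple commands whose combined effect the child can predict and observe.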

Applying artificial intelligence in preschool education is a trend in the future reform of school education. The Whale robot curriculum innovates the form of preschool activities, adopting a "play-learn-practice" approach that combines classroom learning with challenging activities, fully engaging children's interest, building their confidence, and cultivating their problem-solving ability and resilience.

The robot program provides products, teaching, training and competitions for children's AI education; this four-in-one support makes it easy for kindergartens to run teaching activities. The comprehensive programming toys used at the Pudong kindergarten are part of this program.

Usage scenarios

• Literacy promotion classes • Special classes

Program objectives

• Cultivate children's core competitiveness.

Curriculum system

Curriculum implementation

• Teach by asking; learn actively.

• CBL (Creation-Based Learning) teaching method

Core kit

Meets the needs of kindergarten junior, middle and senior classes: 3 years, 6 semesters of comprehensive materials.

• Adjustable activity site model

Kindergartens using the package can take part for free in officially organized science and technology activities and in assessments of children's AI literacy.

Partial product display (makeU 1002)

• Control and electronic components

• Components and auxiliary materials


1. Children’s programming enlightenment toys

Usage scenarios

• Kindergarten-based courses • Home-school linkage

2. Artificial intelligence regional games

Usage scenarios

• Science area • Intelligence area

• Construction area • Programming area

3. Artificial intelligence function room

Usage scenarios

• Science and technology activities for ages 3 to 6 • Game organization

• Exhibitions

4. Design of artificial intelligence environment

Usage scenarios

• Creation of a science and technology innovation environment for the kindergarten science area

Advantages of the Whale Child educational robot

Children’s artificial intelligence enlightenment

The robot will walk with you.

To inquire about the children's AI education product series or discuss related solutions, you are welcome to:

1. Call for advice

2. Leave a message in the backstage of our WeChat official account.

We will arrange to answer your questions as soon as possible!

Leading industry innovation and development through strength: what has Cloud Measurement Data done right?

Across the AI industry, demand for AI technology spans driving, security, finance, industry, healthcare, education and more. The rapid development of machine-learning-based AI depends on rich underlying big data: a powerful model needs a data set with a large number of samples as its foundation, and the quality and diversity of that data significantly affect whether an algorithm model succeeds or fails. Delivering high-precision AI data not only helps the AI industry land real-world applications but also brings a better user experience.

At the data level, data volumes keep growing as AI develops; IDC estimates that global data will reach 163 ZB by 2025. At the same time, the AI data service industry has entered a stage of deep customization, with data services tailored to different scenarios and requirements, and AI data needs shifting from generic, simple scenarios to personalized ones.

To solve the practical problems of AI industrialization, Cloud Measurement Data has distilled its experience into solutions and applied them in practice to help AI applications land. It has overcome technical hurdles, designed scientific, standardized data-processing workflows from task creation to final acceptance, and flexibly met customers' diverse, high-precision data needs. It has launched products and services such as the "data scenario laboratory", the "AI data set management system" and the "cloud data annotation platform", providing high-quality, scenario-based, large-scale processing of perception data for AI enterprises in intelligent driving, smart cities, smart homes, smart finance, new retail and more.

Of course, staying at the front of the AI wave is not easy. It is not hard to see that Cloud Measurement Data has become an industry leader not only because of the strength of its technology and products, but also because its service model and philosophy have evolved with the times, continuously injecting new vitality and momentum into the AI industry.

Cloud Measurement Data entered the market as the industry was rising, but rather than resting on its first-mover advantage, it kept increasing investment in technology to improve production efficiency, bringing its "underlying technology + service capability" to bear in end-to-end training data solutions for autonomous driving, smart homes, smart cities, smart finance and other industries.

At the same time, it keeps a forward-looking eye on hot industries and technology trends, preparing tool chains and data service capabilities in advance to meet new AI data requirements. In the current AI data industry chain, Cloud Measurement Data keenly observed that a systematic data solution for AI engineering was still missing, even though many industries need one. Against this background, it launched a new generation of data solutions for AI engineering, a timely answer to the real needs of many industry customers.

Through a mature data management and labeling platform, this solution integrates with enterprise systems and supports enterprise-defined pre-labeling, algorithm interfaces, personnel management, project management, and the secure delivery of software and hardware. In a labeling environment that safeguards data privacy and security, it supports the efficient circulation of enterprise data, continuously executes data-processing tasks, and raises large-scale production efficiency.

For example, in autonomous driving it enables data cleaning and labeling within a car maker's DataOps (data + operations) closed loop, doubling circulation efficiency compared with the original process. In retail goods inspection, container inspection data flows back continuously through the labeling platform, where annotators visually review and correct the algorithm's pre-labeled results, tripling efficiency compared with purely manual labeling.
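The pre-labeling workflow described here can be sketched as confidence-based triage: predictions the model is sure of are auto-accepted, and the rest are routed to human reviewers. All field names and the threshold below are hypothetical, not Cloud Measurement Data's actual API:

```python
# Hedged sketch of pre-labeling triage: high-confidence model
# predictions skip manual labeling; the rest go to human review.
# Field names and the 0.9 threshold are invented for illustration.

def triage(pre_labels, threshold=0.9):
    auto, review = [], []
    for item in pre_labels:
        (auto if item["confidence"] >= threshold else review).append(item)
    return auto, review

predictions = [
    {"id": 1, "label": "bottle", "confidence": 0.97},
    {"id": 2, "label": "can",    "confidence": 0.55},
    {"id": 3, "label": "box",    "confidence": 0.92},
]

auto, review = triage(predictions)
print(len(auto), len(review))    # most items skip manual labeling
```

The efficiency gain comes from humans touching only the uncertain minority rather than every item.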

"If you want to go fast, go alone; if you want to go far, go together." In the era of industrial intelligence, no single enterprise can fight alone, and the combined value to industry and society compounds. Cloud Measurement Data understands this well: it actively promotes the standardization of the AI data industry, participating in drafting and releasing the "Requirements and Methods for Annotating Lidar Point-Cloud Data of Intelligent Connected Vehicles" and the "Requirements and Methods for Annotating Image Data of Intelligent Connected Vehicle Scenes", contributing experience and wisdom to industrial intelligence and to building a standards system in the AI data service vertical. It also took part in the first series of "Model/MLOps Capability Maturity Model" standards, filling a gap in development and management standards for machine-learning projects at home and abroad.

Summary:

As a vanguard of AI data services, Cloud Measurement Data is actively accelerating the development of AI training data services and contributing experience to industrial intelligence, setting a new paradigm for the industry. Going forward, it will continue to improve, enriching its service capabilities and deepening its technology while maximizing the value of training data and delivering better data support for AI scenarios.

ArchSummit live | Building smooth, natural Flutter pages

Instructors

Taobao Technology Department | Xianyu Technology | Yuncong

"A comprehensive look at strengthening Flutter fluency: the challenges, building online monitoring tools, consolidating optimizations into components and containers, and final optimization advice."

Zhang Yunlong (Yuncong), Xianyu client expert. He has worked at NetEase, ByteDance and Alibaba; at Alibaba he is currently responsible for the Xianyu app's package size, fluency and startup experience.

Outline

This talk revolves around Flutter fluency, covering: 1. the challenges of Flutter fluency optimization; 2. list container and Flutter DynamicX component optimization; 3. performance measurement and DevTools extensions; 4. Flutter scrolling curve optimization; 5. performance optimization suggestions.

Challenges of Flutter fluency optimization

The challenge of business complexity

Flutter has always been known for smoothness, and the list controls shown in Flutter Gallery (left) are indeed very smooth. But a real business scene (right) is far more complex than the Gallery list demo:

  1. Each card has more, and more complex (e.g. rounded), view controls;

  2. When the list scrolls there is more view logic, such as other controls appearing and disappearing under scroll control;

  3. The card controls carry more business logic, such as different labels and promotional prices driven by backend data, plus shared common business logic;

  4. Because Xianyu is an e-commerce app, it needs a degree of dynamic capability to cope with frequently changing campaigns; we use Alibaba's Flutter DynamicX components to implement it.

The framework challenge

Let's look at the overall flow of a list scroll, focusing only on the free-scroll phase after the finger is released.

  1. When the finger is released, the initial velocity is computed in ScrollDragController.end;

  2. The UI thread requests a frame from the platform thread (requestFrame), and the platform thread calls back into the UI thread with beginFrame;

  3. In the animate phase, the UI thread moves the list a small distance and registers the next frame callback with the platform thread;

  4. The UI thread builds widgets and generates or updates the RenderObject tree through Flutter's three-tree diff algorithm;

  5. The UI thread lays out and paints the RenderObject tree, generates a Scene object, and finally hands it to the raster thread to draw on screen.

All of this must finish within 16.6 ms, or the frame is dropped. Most of the time no new card needs to be built, but when a new card enters the visible area the amount of computation balloons, especially in complex business scenes. Keeping all of that work inside a single 16.6 ms frame is no small challenge.
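The 16.6 ms budget comes directly from the display refresh rate, and it shrinks further on high-refresh screens; a quick check:

```python
# The per-frame time budget is simply 1000 ms divided by the refresh rate.

def frame_budget_ms(refresh_hz):
    return 1000 / refresh_hz

for hz in (60, 90, 120):
    print(hz, "Hz ->", round(frame_budget_ms(hz), 1), "ms")
```

At 90 Hz the budget drops to about 11.1 ms and at 120 Hz to about 8.3 ms, which is why jank that is tolerable at 60 Hz becomes visible on high-refresh devices.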

The figure above is a DevTools trace of a scroll: jank occurs exactly when a new card comes on screen, while the other phases are very smooth. Because the scrolling speed decays, the intervals between jank also grow. Since most of the time is smooth, the average FPS is not low, yet the stutter when new cards are built still gives users a noticeably rough feel.

The challenge of dynamic capability: Flutter DynamicX

Xianyu's cards use the self-developed Flutter DynamicX to support dynamic capabilities. The basic principle: edit a layout DSL online, generate a DX file and ship it; the client parses the DX file, generates a DXComponentWidget, combines it with the card data, and finally produces the widget tree. Flutter DynamicX brings dynamic update capability, unified monitoring (for example, DXComponentWidget monitors cards), a good development experience (the online DSL resembles Android layouts, optimized for Android developers), and online editing.

But we pay a price in performance: DX cards add template-loading and data-binding overhead, widgets must be created recursively by traversing WidgetNodes at runtime, and the view nesting becomes deeper (more on this later).

Note: Flutter DynamicX is implemented with reference to the DSL rules of Alibaba Group's DinamicX.

The challenge of perceived smoothness

As described above, jank in a Flutter list is more noticeable than in a native list.

When jank occurs in Android's RecyclerView, the perceived effect is mild; when jank occurs in a Flutter list, there is not only a pause in time but also a jump in the scroll offset, so even small janks are clearly felt.

Even assuming the list content is simple enough that no jank occurs, we found the Flutter list still does not feel the same as Android RecyclerView:

  • With ClampingScrollPhysics, the list feels like a magnet snapping when it stops;

  • With BouncingScrollPhysics, the list starts more sluggishly and the velocity decays faster.

On 90 Hz devices, early Flutter lists were not smooth. The reason: the touch sampling rate is 120 Hz while the screen refresh rate is 90 Hz, so some frames receive 2 touch events and some receive 1, which makes the scroll offset jitter. Since Flutter 1.22, resamplingEnabled can be used to resample touch events.
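As a minimal sketch of the two physics above (not PowerScrollView itself; the list content is illustrative), the physics can be switched on any Flutter list, and touch resampling is a one-line flag:

```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';

void main() {
  // Since Flutter 1.22: resample touch events so a 120 Hz input rate
  // maps cleanly onto, e.g., a 90 Hz display.
  GestureBinding.instance.resamplingEnabled = true;
  runApp(const MaterialApp(home: PhysicsDemo()));
}

class PhysicsDemo extends StatelessWidget {
  const PhysicsDemo({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: ListView.builder(
        // ClampingScrollPhysics ≈ the Android RecyclerView feel;
        // swap in BouncingScrollPhysics for the iOS feel with edge rebound.
        physics: const ClampingScrollPhysics(),
        itemCount: 100,
        itemBuilder: (context, i) => ListTile(title: Text('card $i')),
      ),
    );
  }
}
```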

List container and Flutter DynamicX component optimization

Having covered the challenges of Flutter fluidity optimization, I will now share how we optimized smoothness and distilled the results into the PowerScrollView and Flutter DynamicX components.

PowerScrollView design and performance optimization

PowerScrollView is the Xianyu team's self-developed Flutter list component, which wraps and supplements the Sliver protocol: for data, it adds insertion/deletion and partial refresh; for layout, waterfall flow; for events, card on-screen/off-screen and scroll events; for control, scroll-to-index.

In terms of performance, we optimized the waterfall layout, partial refresh, card splitting, and the sliding curve.

PowerScrollView Waterfall Flow Layout

PowerScrollView's waterfall layout supports vertical layout, horizontal layout, and mixed arrangement (horizontal cards mixed with ordinary cards). Today most of Xianyu's lists, such as the home page and the search results page, use PowerScrollView's waterfall layout.

PowerScrollView Waterfall Flow Layout Optimization

First, conventional caching: for each card we cache its top-left X value and the column it belongs to.

Unlike SliverGrid, a waterfall layout must manage card creation on entering the viewport and destruction on leaving it in units of pages. Before optimization, a page was computed from the cards in one screen's visible area, and to determine the starting Y value of a page the initial layout had to compute both page N and page N+1, so a large number of cards participated in layout calculation and performance was low. After optimization, pages are computed approximately from the average card height, which greatly reduces the number of cards participating in layout, and the number of cards destroyed per page also becomes smaller.
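The paging approximation can be sketched as follows; the identifiers are hypothetical and this shows only the idea, not the PowerScrollView source:

```dart
/// Sketch of the paging idea: instead of laying out every card to find
/// which page a scroll offset falls in, approximate with the average card
/// height, so only the cards of pages N and N+1 need to participate
/// in layout.
class WaterfallPager {
  WaterfallPager({required this.columns, required this.cardsPerPage});

  final int columns;
  final int cardsPerPage;
  double _totalHeight = 0;
  int _measuredCards = 0;

  /// Feed in the height of each card that has been laid out so far.
  void onCardMeasured(double height) {
    _totalHeight += height;
    _measuredCards++;
  }

  double get _avgCardHeight =>
      _measuredCards == 0 ? 0 : _totalHeight / _measuredCards;

  /// Approximate page height: cardsPerPage cards stacked into [columns]
  /// columns of average-height cards.
  double get pageExtent => _avgCardHeight * cardsPerPage / columns;

  /// Which page a scroll offset falls into, without laying out all cards.
  int pageForOffset(double offset) =>
      pageExtent == 0 ? 0 : offset ~/ pageExtent;
}
```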

After column caching and paging optimization, we used Xianyu's self-developed benchmark tool (described later) to compare the waterfall layout with GridView, looking at the frame counts and the worst-frame cost; the performance is essentially identical.

PowerScrollView local refresh optimization

Xianyu's product team wants browsing to feel seamless, without a visible load-more interruption, so the list must trigger LoadMore during scrolling. When Flutter's SliverList appends card data during LoadMore, the list rebuilds: SliverList destroys all cards and recreates them, so the performance is predictably bad. PowerScrollView provides a partial-refresh optimization: all on-screen cards are cached and no longer recreated. UI thread time is optimized from the original 34 ms to 6 ms (lower-left figure), and the Timeline view on the right shows that the depth and complexity of the view build are reduced.

PowerScrollView card splitting optimization

The two cards in the figure are from Xianyu's early search results page, before it became a waterfall layout. Looking at the Timeline when the cards are created (including the DX widget creation and performLayout overhead), the cost of card creation is extremely high: on an ordinary mid-range device, the UI Thread takes more than 30 ms, and optimizing that down to 16.6 ms with routine means is very difficult. For this, the two cards can be split apart and rendered in separate frames.

Looking directly at the source code, the basic idea is to mark the card widget: when the mark is true, the right-hand card first builds a placeholder widget (an empty Container) via _buildPlaceHolderCell and registers a next-frame callback. In the next frame, the right card's needShowRealCell is set to true, it marks itself dirty, and then builds the real content.

Does delaying the build of the card's real content affect what is displayed? Because a Flutter list has a cacheExtent area beyond the visible region, and that area is not visible, in most scenarios users never see a blank card.

Using the Flutter benchmark tool again, the 90th- and 99th-percentile frame costs drop significantly after card splitting, and the number of dropped frames falls from 39 to 27.

Note: when listening for the next frame, you need WidgetsBinding.instance.scheduleFrame to trigger a frame request. When the list is idle there may be no next-frame callback, so tasks in the delayed-display queue would starve and the first-screen content would end up incorrect.
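A minimal sketch of the splitting idea (identifiers are illustrative, not the PowerScrollView source): build an empty placeholder first, then swap in the real card on the next frame, remembering to request that frame explicitly:

```dart
import 'package:flutter/material.dart';

/// Builds an empty placeholder in the current frame and swaps in the real
/// card content on the next frame, spreading the build cost over 2 frames.
class SplitFrameCell extends StatefulWidget {
  const SplitFrameCell({super.key, required this.realBuilder});

  final WidgetBuilder realBuilder;

  @override
  State<SplitFrameCell> createState() => _SplitFrameCellState();
}

class _SplitFrameCellState extends State<SplitFrameCell> {
  bool _needShowRealCell = false;

  @override
  void initState() {
    super.initState();
    // Swap in the real content on the next frame.
    WidgetsBinding.instance.addPostFrameCallback((_) {
      if (mounted) setState(() => _needShowRealCell = true);
    });
    // When the list is idle no new frame is scheduled, so request one
    // explicitly; otherwise the placeholder could stay on screen.
    WidgetsBinding.instance.scheduleFrame();
  }

  @override
  Widget build(BuildContext context) {
    return _needShowRealCell ? widget.realBuilder(context) : Container();
  }
}
```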

Delayed frame-splitting optimization: ideas and suggestions

Comparing the designs of Flutter and H5:

  1. Dart and JS are both single-threaded models that need serialization and deserialization to cross threads;

  2. Flutter's Widget is similar to H5's VDOM; both have a diff process.

Early on, while optimizing React, Facebook proposed the Fiber architecture: the VDOM tree is converted into the Fiber data structure (a linked structure) traversed parent node → child node → sibling node → child node, so the reconcile phase can be interrupted and resumed; based on the Fiber structure, the remaining work continues in the next frame.

Based on the React Fiber idea, we propose our own delayed frame-splitting optimization: go beyond splitting the left and right cards and further decompose the rendered content into a current-frame task, high-priority delayed tasks, and low-priority delayed tasks, with on-screen priority decreasing in that order. The current-frame task is the left and right blank Containers; each high-priority delayed task gets a frame of its own, with the image portions still using Container placeholders; in the Xianyu scenario we split all DX image widgets out of the cards as low-priority delayed tasks, capped at no more than 10 per frame.

By splitting one frame's display work across 4 frames, the worst UI Thread cost on a high-end device is optimized from 18 ms to 8 ms.
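The scheme above can be sketched as a small task queue drained across frames (illustrative names, not the Xianyu source; the per-frame cap of 10 comes from the text):

```dart
import 'dart:collection';

import 'package:flutter/scheduler.dart';
import 'package:flutter/foundation.dart';

/// Render work is split into high- and low-priority tasks that are drained
/// across subsequent frames instead of running inside one build.
class FrameTaskQueue {
  final Queue<VoidCallback> _highPriority = Queue();
  final Queue<VoidCallback> _lowPriority = Queue();
  static const int maxLowPriorityPerFrame = 10;
  bool _scheduled = false;

  void addHigh(VoidCallback task) { _highPriority.add(task); _schedule(); }
  void addLow(VoidCallback task) { _lowPriority.add(task); _schedule(); }

  void _schedule() {
    if (_scheduled) return;
    _scheduled = true;
    SchedulerBinding.instance.addPostFrameCallback(_drain);
    // Make sure a frame is actually coming when the list is idle.
    SchedulerBinding.instance.scheduleFrame();
  }

  void _drain(Duration _) {
    _scheduled = false;
    if (_highPriority.isNotEmpty) {
      // One high-priority task gets a frame of its own.
      _highPriority.removeFirst()();
    } else {
      // Low-priority tasks (e.g. images), capped per frame.
      for (var i = 0;
          i < maxLowPriorityPerFrame && _lowPriority.isNotEmpty;
          i++) {
        _lowPriority.removeFirst()();
      }
    }
    if (_highPriority.isNotEmpty || _lowPriority.isNotEmpty) _schedule();
  }
}
```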

Note 1: high- and low-priority task assignment differs across business scenarios. Note 2: on low-end devices (such as the Vivo Y67), the frame-splitting scheme lets users see the list blank out and then fill in while sliding.

Flutter DynamicX component optimization – how it works

An "Android-layout-like DSL" is edited and compiled into a binary DX file. The client downloads, loads, and parses it to generate the WidgetNode tree; see the figure on the right.

With the business data delivered from the backend, the Widget tree is generated by recursively traversing the WidgetNode tree, and finally rendered.

Note: Flutter DynamicX is implemented with reference to the Alibaba Group DSL rules.

Flutter DynamicX component optimization – cache optimization

Knowing the principle, it is easy to spot the costly steps in the red box in the figure: binary (template) file parsing and loading, data binding, and dynamic widget creation all carry overhead. To avoid paying it repeatedly, we cache DXWidgetNode and DXWidget; the blue-boxed code shows the widget cache.
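The caching idea can be sketched as follows (a hypothetical key scheme, not the actual DX cache code): a widget built from a given template and data is reused when the same card re-enters the viewport.

```dart
import 'package:flutter/widgets.dart';

/// Sketch of the DX widget cache idea: parsed templates and built widgets
/// are keyed so the parse/bind/build cost is paid only once per card.
class DxWidgetCache {
  final Map<String, Widget> _widgets = {};

  Widget obtain(
    String templateId,
    Object data,
    Widget Function() buildFromTemplate,
  ) {
    // The key combines the template and the bound data, so a card
    // re-entering the viewport with unchanged data reuses its widget.
    final key = '$templateId#${data.hashCode}';
    return _widgets.putIfAbsent(key, buildFromTemplate);
  }

  void clear() => _widgets.clear();
}
```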

Flutter DynamicX component optimization – independent isolate optimization

In addition, the logic above is moved into a standalone isolate to minimize the load on the UI isolate. In an online grayscale A/B experiment, the average jank (bad-frame) ratio dropped from 2.21% to 1.79%.

Flutter DynamicX component optimization – hierarchy optimization

Flutter DynamicX provides an Android-layout-like DSL, adding a Decoration layer to implement each control's padding, margin, and corner radius, and a DXContainerRender layer. Each layer has a clear responsibility and the code layering is clean. However, the 2 extra layers deepen the Widget tree, making the three-tree diff logic more complicated and performance lower. We therefore merged the Decoration layer and the DXContainerRender layer; the middle Timeline figure shows that the optimized flame graph's depth and complexity are reduced. In an online grayscale A/B experiment, the average jank ratio dropped from 2.11% to 1.93%.

Performance measurement and DevTools extension

Having covered the optimizations, this section describes how we measure fluency, and how we build and extend the tooling.

Offline scenario – Flutter Benchmark

Measuring Flutter fluency requires the computation cost on the UI Thread and Raster Thread, so for before/after comparisons of an optimization we use the per-frame time of the UI Thread and Raster Thread.

In addition, fluency numbers are affected by gestures and scroll speed, so measurements based on manual operation carry error. Here we use WidgetController to drive the list with a fling.

The tool lets you set the scroll speed, the number of scrolls, the scroll interval, and so on. After the scroll test completes, it reports the UI and Raster Thread frame distribution and the 50th-, 90th-, and 99th-percentile frame costs, giving performance data across multiple dimensions.

Offline scenario – screen-recording based detection

Flutter Benchmark gives multi-dimensional measurements on Flutter pages, but sometimes we need a horizontal comparison against competitor apps, so we need a tool that can compare across technology stacks. Xianyu built a screen-recording tool on the Android side: the phone screen is captured via VirtualDisplay every 16.6 ms to obtain frame data (byte arrays), and the hash of the byte array represents the current picture; if the hash is unchanged between two consecutive captures, a jank is recorded.

To ensure that the detection tool itself does not jank, the captured data is compressed before hashing, with a higher compression ratio on low-end devices.

With this non-invasive detection, one scroll test can report the average FPS (57), the variance of the frame distribution (7.28), the number of large janks per second (0.306), and the cumulative large-jank time (27.919 ms). The middle array shows the frame distribution: 371 is the number of normal frames, 6 the number of small janks around 16.6×2 ms, and 1 the number around 16.6×3 ms.

Here a large jank is defined as a frame longer than 16.6 × 2 ms.

Offline scenario – performance detection based on DevTools

In addition, Xianyu extended DevTools for its own scenarios: in the Timeline view, spans longer than 16.6 ms are highlighted in red, which is convenient during development.

Online scenario – how Flutter High Availability measures FPS

For online scenarios, Xianyu self-developed "Flutter High Availability". The basic principle rests on 2 events:

  • The ui.window.onBeginFrame event

    • The engine notifies that the vsync signal has arrived, telling the UI Thread to start preparing the next frame

    • Triggers the SchedulerBinding.handleBeginFrame callback

  • The ui.window.onDrawFrame event

    • The engine notifies the UI Thread to start drawing the next frame

    • Triggers the SchedulerBinding.handleDrawFrame callback

We record a frame-start event before handleBeginFrame runs and a frame-end event after handleDrawFrame. Each frame we also compute the list control's offset delta (see the code for the specifics); once the accumulated offset exceeds 1, a calculation is performed, filtering out non-scrolling scenes, and the per-frame times are used to compute the FPS value.
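The wrapping of the two events can be sketched as follows (illustrative, not the Flutter High Availability source; the FPS formula here counts each long frame as multiple frame slots and is an assumption):

```dart
import 'dart:ui' as ui;

/// Wraps the engine's frame callbacks to record frame start/end times,
/// then derives an FPS value from the per-frame durations.
class FpsRecorder {
  final List<Duration> _frameCosts = [];
  ui.FrameCallback? _origBegin;
  ui.VoidCallback? _origDraw;
  Stopwatch? _stopwatch;

  void install() {
    final dispatcher = ui.PlatformDispatcher.instance;
    _origBegin = dispatcher.onBeginFrame;
    _origDraw = dispatcher.onDrawFrame;
    dispatcher.onBeginFrame = (timeStamp) {
      _stopwatch = Stopwatch()..start(); // frame start
      _origBegin?.call(timeStamp);
    };
    dispatcher.onDrawFrame = () {
      _origDraw?.call();
      final sw = _stopwatch;
      if (sw != null) _frameCosts.add(sw.elapsed); // frame end
    };
  }

  /// A frame that spans n 16.6 ms slots counts as n - 1 dropped frames.
  double get fps {
    if (_frameCosts.isEmpty) return 60;
    const slotUs = 16600; // one 60 Hz frame slot in microseconds
    var slots = 0;
    for (final cost in _frameCosts) {
      final s = (cost.inMicroseconds / slotUs).ceil();
      slots += s < 1 ? 1 : s;
    }
    return 60.0 * _frameCosts.length / slots;
  }
}
```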

Online scenario – FlutterBlockCanary jank stack detection

After obtaining the online FPS value with Flutter High Availability, locating the cause requires stack information. Xianyu collects jank stacks with the self-developed FlutterBlockCanary. The basic principle: a signal is emitted from the C layer, e.g. every 5 ms; on each signal the Dart UI Thread stack is sampled; the resulting series of stacks is aggregated, and identical consecutive stacks are taken to indicate that a jank occurred there. That stack is the jank stack we want.

The figure below shows stack information collected by FlutterBlockCanary; the FrameFpsRecorder.getScrollOffset in the middle is a janky call.

Online scenario – FlutterBlockCanary over-rebuild detection

In addition, FlutterBlockCanary integrates over-rebuild detection. By replacing the BuildOwner object in WidgetsFlutterBinding and overriding its scheduleBuildFor method, dirty Elements can be intercepted. From each dirty Element node we extract the node depth, the number of direct children, and the total number of descendants.
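The interception point can be sketched like this (a hypothetical subclass, not the FlutterBlockCanary source; wiring it into the binding is omitted):

```dart
import 'package:flutter/widgets.dart';

/// A BuildOwner whose scheduleBuildFor hook records every dirty Element's
/// depth and descendant count before delegating to the normal logic.
class SpyBuildOwner extends BuildOwner {
  SpyBuildOwner({super.onBuildScheduled});

  @override
  void scheduleBuildFor(Element element) {
    // Depth of the dirty node in the element tree.
    var depth = 0;
    element.visitAncestorElements((e) {
      depth++;
      return true;
    });
    // Total number of descendants under the dirty node.
    var descendants = 0;
    void count(Element e) {
      descendants++;
      e.visitChildren(count);
    }

    element.visitChildren(count);
    debugPrint('dirty: ${element.widget.runtimeType} '
        'depth=$depth descendants=$descendants');
    super.scheduleBuildFor(element);
  }
}
```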

Based on the descendant count, on the Xianyu detail page we located a "quick question view" whose rebuild count and descendant count were too large during scrolling. Reviewing the code, we found the view sat too high in the hierarchy; by sinking it toward the leaf nodes, the number of dirty build nodes was optimized from 255 to 43.

Flutter sliding curve optimization

The jank optimization means, measurements, and standards above mainly revolve around FPS. But from the user's perceived feel, we found Flutter still has many points to optimize.

Flutter list sliding curve vs. native curves

Comparing the offset/time scroll curves, Flutter's BouncingScrollSimulation is close to the iOS scroll curve, and ClampingScrollSimulation is close to RecyclerView's. Checking the Flutter source code confirms this.

Because BouncingScrollSimulation has a rebound, many pull-to-refresh and load-more features are built on top of it, which makes Flutter pages slide differently from native Android pages.

Flutter list performance and optimization under fast sliding

Although ClampingScrollSimulation slides close to Android RecyclerView, in fast-sliding scenarios the Flutter list first decelerates quickly and then speeds up again before stopping. Looking at the moment the sliding curve stops, the velocity does not simply decline: it accelerates, reaches the end point, and stops. From the source formula one can see that Flutter's ClampingScrollSimulation approximates the Android RecyclerView curve by formula fitting. Under fast sliding, the fitted curve deviates near its end point (see the broken section in the right figure), so the speed picks up.

In other words, Flutter's formula fit is not ideal. Recently there has also been a PR proposing a Dart implementation of the RecyclerView curve.

Flutter list performance and optimization under jank

The first chapter mentioned that at the same FPS, say 55, a native list still feels smooth while a Flutter list's stutter is obvious. One reason is that native lists usually spread work over multiple threads, so large janks are less likely; the other is that with the same small jank, Flutter stutters visibly while a native list feels fine. Why?

Deliberately creating small janks while building cards and comparing the Flutter list with RecyclerView before and after, we find that RecyclerView's offset does not jump, while the Flutter curve has many burrs: Flutter scrolling is computed from a d(t) curve, so when a jank occurs, Δt doubles and the offset jumps as well. It is the combination of the time pause and the offset jump that makes users perceive even small janks in a Flutter list.

By modifying the y = d(t) formula to clamp Δt to 16.6 ms during a jank, small janks no longer cause offset jumps. For large janks there is no need to reset Δt to 16.6 ms: the pause itself already makes the jank obvious to the user, and keeping the offset from jumping would only make the scroll distance fall short.
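The Δt clamp can be sketched as a wrapper around a scroll Simulation (an illustrative sketch, not the actual patch; it assumes x is evaluated once per frame in increasing time order):

```dart
import 'package:flutter/physics.dart';

/// Wraps a scroll simulation so that when a jank makes the elapsed time
/// jump by more than one frame, the extra time is swallowed and the
/// offset does not leap forward.
class JankTolerantSimulation extends Simulation {
  JankTolerantSimulation(this._inner);

  final Simulation _inner;
  static const double _frame = 0.0166; // one frame, in seconds
  double _lastTime = 0;
  double _skipped = 0; // accumulated janked time we pretend never passed

  double _mapTime(double time) {
    final dt = time - _lastTime;
    if (dt > _frame) _skipped += dt - _frame; // clamp a janked Δt to 16.6 ms
    _lastTime = time;
    return time - _skipped;
  }

  @override
  double x(double time) => _inner.x(_mapTime(time));

  @override
  double dx(double time) => _inner.dx(time - _skipped);

  @override
  bool isDone(double time) => _inner.isDone(time - _skipped);
}
```

A design note: swallowing janked time means the fling lasts slightly longer in wall-clock terms, which is exactly the trade the text describes for small janks.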

Performance optimization

Finally, some suggestions for performance optimization.

  1. In optimization, pay more attention to the user's perceived experience, not just the performance numbers. As the upper-right figure shows, even with identical FPS values, the run with jank clearly feels worse; in the 2 game screen recordings at the bottom, one averages 40 fps and the other 30 fps, yet the steadier of the two feels smoother.

  2. Pay attention not only to UI Thread cost but also to Raster Thread overhead; features and operations such as saveLayer can also cause jank.

  3. For tooling, use different tools in different scenarios. Note whether the problem a tool detects reproduces stably or is just data jitter. Also consider the overhead of the tool itself, which needs to be as low as possible.

  4. For optimization ideas, broaden the direction. Most Flutter optimizations reduce computation; multithreading is another direction, see the independent-isolate optimization of Flutter DynamicX; in addition, when one frame's task is too heavy to digest, consider splitting it across multiple frames, rendering one card per frame and prioritizing what the user sees first.

  5. Finally, keep an eye on the Flutter community. The community keeps producing optimizations of all kinds; regularly upgrading Flutter, or cherry-picking optimization commits, is a good choice.

Performance analysis tool usage suggestions

For Flutter tooling, the first recommendation is the official DevTools: the Timeline and CPU flame graphs help us find problems; Flutter also provides a wealth of debug flags to assist in locating problems, and being familiar with what each debug switch does will benefit daily development. Beyond official tools, performance logs are also good auxiliary information; as shown in the lower-right corner, Xianyu's Fish-Redux component outputs the task overhead during scrolling, making it easy to see what happened at a given moment.

The overhead of performance analysis tools themselves

Performance measurement tools inevitably have some overhead, but it must be kept within an acceptable range, especially online. In an earlier case, the FlutterBlockCanary tool flagged FrameFpsRecorder.getScrollOffset as time-consuming, and that logic was exactly Flutter High Availability collecting the scroll offset. See the source in the right figure: each frame recursively traverses to collect RenderViewportBase, which is no small overhead. In the end we avoided the repeated computation during scrolling through caching.

Jank optimization suggestions

Official documents and good performance articles have distilled many routine optimizations on the UI and GPU sides: refresh the smallest widget possible, use itemExtent, prefer Selector and Consumer to avoid unnecessary diff and layout computation, and reduce saveLayer, for instance by replacing semi-transparent effects with images, to ease the Raster thread's load.

For space reasons only some are listed here; see the official documentation for more common optimization suggestions.

Use the latest Flutter Engine

As mentioned earlier, the Flutter community is active; the Framework and Engine layers keep landing optimization PRs, most of which improve performance from the bottom up without the business layer noticing.

A typical optimization: in the existing Flutter scheme, each vsync signal triggers the build work, and only at the end of the build is the next vsync callback registered. When there is no jank, see the "Normal" figure. But when a frame just exceeds 16.6 ms, see the "Actual Results" figure: because the next vsync listener is registered only afterwards, the next build is delayed and a large idle gap appears in between. Obviously what we expect is that the next frame's work starts immediately after the previous one ends; if it finishes in time, the screen still looks smooth.

If the team allows, upgrade the Flutter version regularly; maintaining your own independent Flutter branch is also a good choice, cherry-picking optimizations from the community so you keep business stability while still enjoying community contributions. In short, keep an eye on the community.

Summary

To summarize, we shared the challenges, monitoring tools, optimization methods, and suggestions around Flutter fluidity optimization. Performance optimization should be people-centered, deriving monitoring metrics and optimization points from the actual perceived experience. Fluidity optimization is never finished, and the above is not everything: how to better reuse Elements, how to avoid a busy Platform Thread starving the vsync signal, and so on, are all points worth attention. Only sustained technical enthusiasm and curiosity can push app performance to the extreme; technical teams also benefit from connecting with open-source communities and other teams and companies, for stones from other hills may serve to polish our own jade.

Lenovo grabs the first launch again: what has happened between Xiaomi's "first batch" and Qualcomm?

Zhongguancun Online reports: this is the second time Lenovo has grabbed the first launch of Xiaomi's usual flagship chip. The first was two years ago with the Snapdragon 855, when the head of Lenovo's phone business was still the "shopkeeper" we are all familiar with.

A year later, Lenovo has once again grabbed the first launch of Qualcomm's high-end flagship chip and achieved a world first. Where is Xiaomi? Has the long-time first adopter of Qualcomm flagship chips suddenly collapsed?

Besides the high-end flagship chip, the second-tier flagship Snapdragon 870 has also slipped out of Xiaomi's hands. This makes one wonder: is Xiaomi losing its touch? Going from first launch to merely first batch might be excusable once or twice, but if it becomes routine, there is a real problem.

Hands-on teaching | A resume-writing guide for B-end product managers (with professional phrasing + several fictional resume templates)

Editor's note: everyone roughly knows which modules a resume contains, but writing a good one is not so easy. As a B-end product manager, how do you write a resume that shows your highlights, is practical, and fully displays your abilities? In this article the author summarizes a resume-writing guide for working B-end product managers; let's take a look.

I. Introduction

A resume sounds simple but is hard to write well, because everyone has their own preferences and understanding of resumes. This article is a summary of my personal views on B-end product manager resumes, much of it colored by personal taste, so if you disagree, adapt it as you see fit.

This article is mainly for working product managers with B-end experience to read and borrow from. It is not suitable for fresh graduates, because I am not very familiar with campus recruiting and have nothing to say about it, so new graduates should read with caution...

II. Resume structure

I use the same structure in all three versions, similar to the mainstream product-manager resumes on the market, with 5 large modules in total, as shown below:

Many people agonize over whether to write a self-evaluation. My suggestion is to write one: self-evaluation is a key module of your resume where you should concentrate your highlights into their essence, rather than making the interviewer dig them out of your work and project experience.

Some people put this module at the end; I suggest putting it at the front. As the Pyramid Principle says, state the conclusion first; leading with your best is the right move.

The basic information section rarely goes wrong, but some details need attention; I have marked them: gender, age, years of experience, and target position all need careful checking.

Next, look at the basic information in the fictional resumes I wrote. These resumes are written in Markdown, so I used tables to lay out this fragmented information.

The resume owners, "Mai Feifei" and "Mai Xiaofei", represent product managers of around five-plus years of experience; the overall resume structure is the same, and only the experience and personal highlights differ.

Basic information of Mai Feifei

Basic information of Mai Xiaofei

Self-evaluation is where you drop your heaviest hitters. Don't write filler like "lively, optimistic, hard-working, highly self-motivated, strong execution, brave in accepting new challenges". Squeeze the water out of this section; every sentence should surface real substance.

For B-end product managers, I suggest writing from large to small: first the industry/company level, then the project/department level, and finally the individual level. This reads more clearly to HR or the interviewer, follows the MECE principle, and is more structured.

Mai Feifei's self-evaluation

Mai Xiaofei’s self-evaluation

Some of the self-evaluations in the pictures may run long because I deliberately wrote extra introductions for inspiration. When writing your own, trim them down and control the length and text density. If you have other highlights or additions, you can also put them here, such as outstanding work samples, awards, or items that match the job especially well.

Work experience is the module that took me the longest, because simply listing what you did is easy, but presenting the things you accomplished at a company, and fabricating plausible content for a fictional resume, was genuinely hard for me.

Work experience should highlight what you mainly did and what results you achieved. If some content is hard to write, distill some keywords and use them to match the job description in the posting, so HR can see that you have done this kind of work.

If you have substance, write substance; if not, write keywords, or bold the keywords to add visual emphasis.

Mai Feifei's work experience

As for whether this module should include "company introduction" and "work performance", I asked some friends and opinions differ, so I made two versions; compare them and pick one. If you can't decide, write multiple resumes and send them out in alternation to see which works better.

Mai Xiaofei's work experience. My personal recommendation is to write work experience and project experience separately: work experience shows which companies you worked at, for how long, and what outstanding contributions you made; project experience emphasizes whether your past projects match the job requirements, whether your experience fits the current position, whether you have taken projects from 0 to 1, how deep and rich the experience is, and so on.

Project experience is the second most important part of the resume; the first, I think, is the self-evaluation.

Project experience is hard to write because projects depend heavily on vertical domain experience, and many people add data to their project experience to increase realism. But this is easy to trip over in an interview: if you are not familiar with the data, or made it up, the impact on the interview can be large.

Project experience is generally written using the STAR method, which is easier said than done; I sometimes fail to get it right myself, because some projects have no measurable results, or the results show the project simply went badly.

Mai Feifei's project experience 1

Mai Feifei's project experience 2

There are many variants of the STAR method for writing project experience, so I wrote two templates; compare them and see which suits you better.

Mai Xiaofei's project experience 1

Mai Xiaofei's project experience 2

About project experience: I spent a long time looking at many resumes for reference and found that much of what people write is jargon or stock phrases. Reading them, I often could not tell what was actually meant, yet when actually screening resumes this did not seem to affect my subjective impression much.

A small amount of unexplained jargon in a resume does not really affect how I feel about it overall; as long as the key points come through, no one scrutinizes the innocuous filler closely.

At the end of the article I will put some jargon and stock phrases I have collected; when you are stuck writing a resume or out of inspiration, you can borrow from them.

For space reasons, I suggest combining educational experience and certificates into one section; if you have a long education history and many certificates, splitting them into two modules is fine too.

This section basically cannot go wrong; the one point to note is your education level. Companies sometimes screen on education first, so it is best to mark on the resume whether your degree is full-time or part-time. Being frank reduces screening trouble for HR and the interviewer, and in turn saves your own time.

III. Supplementary content

Stock phrases are fine, but control the rhythm and length; don't write too much grand-sounding emptiness. The following are excerpted from the web, partly edited by me.

  1. Responsible for requirements gathering and analysis, functional design, optimization iterations, and producing the corresponding documentation; pushed the development team through testing and release.
  2. Responsible for 0-to-1 product research, competitive analysis, product architecture, and functional design.
  3. Integrated front-office and back-office business processes with product functional design, and restructured the product according to the business plan.
  4. Responsible for 0-to-1 requirements research, process and functional design, and resource coordination; pushed product development to launch, gathered feedback from stakeholders, and followed up with subsequent optimizations.
  5. Closed-loop project management: sorted out project requirements, coordinated business resources to get the project launched on time, tracked user feedback, and refined product strategy against real business scenarios to ensure product quality and effectiveness.
  6. Responsible for designing new features of xx and iterating on existing ones. Proficient with tools such as Axure and Xmind for producing project prototypes and PRDs, and for organizing requirements reviews.
  7. Coordinated and communicated across departments, promoting close cooperation among UI designers, developers, operations staff, and others, so that requirements landed on schedule; followed up on subsequent iterative optimization.
  8. Responsible for user research on product xx: liaised with related business lines, sorted out scenarios, business processes, and user needs, built the requirements pool, produced flowcharts and mind maps, closed the requirements loop, and continuously improved the product's user experience.
  9. Summarized and analyzed competing products, kept daily watch on competitors and industry trends, and contributed to product planning and positioning.
  10. Continuously monitored and analyzed data from newly launched features, and kept refining product strategy based on business needs and performance data.
  11. Tracked post-launch data; have some data analysis skills, understand the AB-test design process, and can summarize results.
  12. Brainstormed and evaluated ideas, produced AB-test designs, tracked and analyzed data from old and new versions once live, mined the user behavior behind the data, and shipped the better variant.
  13. Responsible for overall platform product planning and functional design for the business; collaborated with the development team to push projects through to delivery.
  14. Responsible for the core business (xxx module, xxx module, etc.): sorted out business processes, organized the product's logical rules, and formulated and executed project iteration plans.
  15. Responsible for product requirements analysis and management: sorted out business processes, designed and planned the corresponding product features, wrote requirements documents, and coordinated internal and external resources to achieve product goals and manage projects.
  16. Responsible for collecting user feedback and monitoring service modules (xxx module, xxx module); analyzed user data, tracked product performance, reported progress, and adjusted product strategy in time.
  17. Mainly responsible for the overall planning and iterative versions of xxx supply-chain-related products, including market analysis, competitive analysis, prototypes, and PRDs; collaborated with the R&D team to complete development of each product version.
  18. Responsible for product lifecycle management, including internal promotion and training for product launches, ensuring business staff are familiar with product features and helping them solve business problems.
  19. Analyzed ordering operational data and proposed improvements that raised the usability and user experience of the ordering system.
  20. Determined development plans, coordinated project resources, and tracked project progress to complete projects successfully and achieve their goals.
  21. Mainly responsible for requirements analysis and system design of the company's OMS (order management) and WMS (warehouse management) systems; participated in integration planning with related upstream systems and in multi-project management.
  22. Responsible for daily requirements communication with each business unit, organizing the relevant documents, and providing product-side solutions; responsible for communicating with regional information divisions and business units and following up on project plans and progress.

As for data points, you need to know them inside and out, or you'll get stuck when grilled in the interview. For B-end products, be cautious here and avoid writing data casually, because B-end data is often sensitive, or hard to present as an achievement.

  1. Through iterative optimization of the product / XX feature, improved product usability and the order rate; order conversion increased by XX%.
  2. After optimizing the XX feature / launching the XX feature, DAU rose from XX% to XX%.
  3. Through the XX initiative, the repurchase rate increased from XX% to XX%.
  4. Through the XX initiative, conversion on the XX page increased to XX%.
  5. By launching XXX, met the XXX requirement and brought core customers to the company.
  6. The XXX feature connected the XXX and XXX systems, driving growth in both volume and revenue; XXX increased within half a year.

As for detailed project experience: your resume can't hold everything, so pick out the business background of the project. What kind of audience, what kind of users, what the product does, which modules and main functions it has, which users touch your product and how they use it. If the product's scale is large, you can cite its user scale and some good-looking data on your resume.

Reference phrasing for this part of the resume is as follows; this content I mostly organized myself:

  1. Drove the project from 0 to 1 across all stages, including business research, feasibility analysis, requirements analysis, producing the project research report, forming the overall product-side plan, on-site warehouse testing, special training, assessment and practice, and post-launch quality tracking.
  2. Optimized warehouse operation processes, including procurement inbound, supplier returns, customer returns, inter-warehouse transfers, outbound, inventory adjustment, and inventory counting.
  3. Drove the project from 0 to 1, including up-front business requirements research, business process sorting, system plan organization, requirements documentation, and system prototype design.
  4. Followed system development progress and coordinated resources to ensure the project was completed on time and delivered with high quality; handled pre-launch product function testing, business staff training, internal product evangelism, and so on.
  5. Drove the project from 0 to 1: responsible for procurement business research, producing the project research report, forming the overall solution, online function testing, business staff training, and post-launch function optimization and iteration.
  6. Optimized procurement processes, including purchase requisitions, purchase orders, procurement returns, procurement source management, procurement reconciliation, and other flows.
  7. Optimized the financial management processes, including reconciliation, payment, inventory accounting, multi-service data reconciliation, and so on.
  8. Responsible for product planning and design of the dedicated-line small-parcel business, including upstream links such as supplier integration, tracking-platform integration, and ERP integration.
  9. In line with business unit requirements, introduced more international logistics channels and integrated logistics providers' interfaces for placing orders, obtaining tracking numbers, and retrieving trajectory information.
  10. Improved the flow between logistics orders and logistics states, and built visual reports to present logistics trajectories and their timeliness.
  11. Connected order, base information, and status mapping with the mainstream cross-border e-commerce platforms.
  12. Improved order routing and optimal logistics-method matching with configurable smart trial-calculation rules, raising order fulfillment efficiency and saving operating costs.
  13. Optimized order fulfillment flow and state transitions, and improved the functionality for order profit accounting rules and order splitting.
  14. Optimized the order refund process, connecting after-sales with the customer service, warehouse, and finance roles to complete the business interaction chain.
  15. Responsible for requirements analysis of the OMS, WMS, and operations platform; system product planning, functional design, requirements and iteration management, and coordinating integration among external customers, internal teams, and systems.
  16. Responsible for the platform's 0-to-1 requirements analysis, planning, functional design, and iteration management; completed the platform's XXXX integration and XXX integration items.
  17. Responsible for WMS and app system planning, requirements research and analysis, system design, and requirements management.

Fourth, summary

There are many articles about resumes out there, and the information is scattered, which makes finding good material difficult.

So, to save myself from hunting for a similar tutorial next time, I decided to write one. On the one hand it meets my own future needs; on the other, it lets me organize and share my own ideas with you in advance.

Writing out this slow thinking process forces me to summarize this body of knowledge, which helps me personally a great deal. I also hope my contribution can help friends who need it. The positioning of my public account is fairly vertical and down-to-earth, so if you are also working on supply-chain-related products, some of these examples may be directly usable.

My name is Vitamin. A former PHPer, I have built online education products and spent more than 4 years on cross-border warehousing and logistics. I am currently a supply chain product manager at a foreign-trade SaaS company, focused on WMS / OMS / TMS / BMS / ERP and related fields, sharing supply-chain product knowledge.

This article was published on Everyone Is a Product Manager. Reproduction without the author's permission is prohibited.

Title image from Unsplash, under the CC0 license.

I've laid the essence of Python coroutines bare!

This article is packed with information: from IO multiplexing, to using generators, to the principles behind async and await. It explains things in plain terms, and the analysis is thorough. Very hardcore!

A couple of days ago, for personal reasons, I wrote some Python after a long break, and it involved "coroutines". Last time I looked, coroutines were a distinctive feature of the Tornado web framework; now we have async and await keyword support. Thinking about their implementation and reviewing the evolution over the years, I found it rather interesting.

Both are single-threaded. Why does the originally inefficient code become more efficient once we add async, await, and some asynchronous libraries?

If you do network or web development in Python, this question has probably puzzled you. This article attempts to give an answer.

0x00 Before beginning

First: this article does not walk you through the source code and then tell you how Python's standard implementation works. Instead, we start from real problems, think about solutions, and experience the evolution of those solutions step by step. Most importantly, I hope you come away with a systematic upgrade of your knowledge.

This article only offers an independent line of thinking; it does not follow the historical details or the current actual implementation.

Second: reading this article requires familiarity with Python; at minimum you should understand the concept of generators in Python.

0x01 IO multiplexing

This is the key to performance. But here we only explain the concept; the implementation details are not the point, and what follows is enough for understanding Python coroutines. If you already know it well enough, skip ahead to 0x02.

First, you should know that every network service program is one giant loop, and your business logic gets called at some point inside that loop:

```python
def handler(request):
    # business logic goes here
    ...

while True:
    # get a new request
    request = accept()
    # look up the user-written business-logic function by route
    handler = get_handler(request)
    handler(request)
```

Imagine that one of your web service handlers needs to call an API and respond with its result.

In the most traditional network applications, once your API request goes out, the program stops and waits for the response; new requests can only get in after the response ends. What if the API you depend on suffers heavy packet loss and responds slowly? Then the application's throughput will be very low.

Many traditional web servers use multithreading to solve this: run the handler in another thread, one thread per request, so a blocked thread doesn't stop new requests from entering. This solves the problem to some extent, but for systems with high concurrency, thread scheduling brings significant performance overhead.

IO multiplexing solves the problem without using threads. It is a facility provided by the operating system kernel, practically tailor-made for this kind of scenario. Simply put, when your program hits network IO, it tells the operating system to keep an eye on it, and the operating system gives you a way to check, whenever you like, which IO operations have completed. Like this:

```python
# illustrative pseudocode for OS-provided IO multiplexing

# register an IO operation's id and type with the OS
io_register(io_id, io_type)

# fetch the completed IO operations
events = io_get_finished()
for (io_id, io_type) in events:
    if io_type == READ:
        data = read_data(io_id)
    elif io_type == WRITE:
        write_data(io_id, data)
```
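Python's standard library exposes this OS facility directly through the selectors module. A minimal self-contained sketch, using a connected socket pair instead of a real network peer:

```python
import selectors
import socket

# a connected pair of sockets stands in for a real network peer
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

sel = selectors.DefaultSelector()
# register our interest in "b is readable" with the OS
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")

# ask the OS which of the registered IOs are ready
for key, mask in sel.select(timeout=1):
    data = key.fileobj.recv(1024)
    print(data)  # b'ping'

sel.unregister(b)
a.close()
b.close()
```

Mapping back to the pseudocode: register corresponds to io_register, and select to io_get_finished.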

Merging the IO multiplexing logic into our server, it looks roughly like this:

```python
call_backs = {}

def handler(request):
    # do jobs here
    def call_back(result):
        # use the returned result to finish the remaining work...
        ...
    call_backs[io_id] = call_back

# the new loop
while True:
    # fetch the completed IO events
    for (io_id, io_type) in io_get_finished():
        if io_type == READ:  # a read has completed
            data = read(io_id)
            call_back = call_backs[io_id]
            call_back(data)
        else:
            # handle other types of IO events
            pass
    # get a new request
    request = accept()
    handler(request)
```

Our handler now returns immediately after issuing the IO operation. Meanwhile, each iteration of the loop runs the callbacks for completed IO, so network requests no longer block the whole server.

The pseudocode above is only for understanding; the real details are more complicated. For example, accepting a new connection is itself an IO event on the listening port, obtained from the operating system the same way.

If we split the loop part, together with the call_backs dictionary, into a separate module, we get an event loop: that is the ioloop provided by the Python standard library asyncio.

0x02 Using generators to eliminate callbacks

Focus on the handler function that holds our business logic. With an independent ioloop, it now becomes:

```python
def handler(request):
    # business logic code...
    def call_back(result):
        # the work that needs the API response
        print(result)
    # issue the API request and register the callback
    asyncio.get_event_loop().io_call(api, call_back)
```

At this point the performance problem is solved: we no longer need multithreading to keep accepting new requests, and we no longer block while waiting for API responses.

But we have also introduced a new problem: the original business logic code is now torn in two. The code before the API request is normal, but the code after the request can only live inside the callback function.

Here our business logic has only one API call. If there are several APIs, plus calls to Redis or MySQL (which are essentially network requests too), the whole logic gets split into pieces, a real burden on business development.

In languages with anonymous functions (JavaScript, for instance), this can even trigger so-called "callback hell".

Next, let's find a way to solve this problem.

It's easy to see what we want: a function that can be suspended when it reaches a network IO operation, and woken up at that breakpoint once the IO completes.

If you're familiar with Python's generators, you should notice that they have exactly this capability:

```python
def example():
    value = yield 2
    print("get", value)
    return value

g = example()
# start the generator; we get 2
got = g.send(None)
print(got)  # 2
try:
    # resume it; "get 4" is printed, 4 being the value we sent in
    got = g.send(got * 2)
except StopIteration as e:
    # the generator has finished; this prints 4,
    # e.value being the generator's return value
    print(e.value)
```

A function containing the yield keyword produces a generator when called, and the generator's key method, send, lets us interact with it.

g.send(None) runs the generator's code until it hits yield and returns the yielded object, namely 2. The generator's code pauses there until we call g.send(got * 2): the 2 * 2, i.e. 4, is assigned to value on the left of yield, and the generator's code resumes.

yield is like a door: you can send one thing out through it, and bring another thing back in.

If send makes the generator run to its end, the send call raises a special exception, StopIteration, which carries a value attribute: the generator's return value.

If we convert our handler into a generator with the yield keyword, run it until it yields the details of an IO operation, and then send the IO result back in to resume it, the problem of fragmented business code is solved:

```python
def handler(request):
    # business logic code...
    # when an API request is needed, just yield the IO request info
    result = yield io_info
    # use the result returned by the API to finish the remaining work
    print(result)

# this function is registered with the ioloop
# and called back whenever a new request arrives
def on_request(request):
    handler = get_handler(request)
    g = handler(request)
    # the first send starts the generator and obtains the IO info
    io_info = g.send(None)
    def call_back(result):
        # resume the generator with the IO result
        g.send(result)
    asyncio.get_event_loop().io_call(io_info, call_back)
```

In the example above, the handler code written by the user is no longer scattered across callbacks. The on_request function still interacts with the ioloop through callbacks, but it lives inside the web framework and is invisible to the user.

The code above is enough to show how generators can eliminate callbacks, but there are two gaps:

  1. The business logic initiates only one network IO, while in practice there are often more.

  2. The business logic doesn't call any other asynchronous functions (coroutines), while in practice we usually call through several layers.

Let's look at a more complex example:

Here request performs the real IO, while func1 and func2 merely contain calls. Clearly the code can only be written like this:

```python
def func1():
    ret = yield request("http://test.com/foo")
    # use the result of the first request to call func2
    ret = yield func2(ret)
    return ret

def func2(data):
    result = yield request("http://test.com/" + data)
    return result

def request(url):
    # simulate returning an IO operation that carries all of the
    # operation's details, simplified to a string here
    result = yield "iojob of %s" % url
    return result
```

In request, we expose the IO operation to the framework via yield.

As for func1 and func2, which call request, clearly we must add the yield keyword there too: otherwise the call to request just returns a generator object without pausing, and execution charges on through the subsequent logic, which is obviously wrong.

This is basically how we wrote asynchronous code in the Tornado framework before yield from, async, and await existed.

To run the entire call stack, the rough process is as follows:

  1. Call func1 to obtain its generator.

  2. Call send(None) to start it; we get the result of request("http://test.com/foo"), i.e. a generator object.

  3. send(None) starts the generator produced by request and obtains the IO operation, which the framework registers with the ioloop along with a callback.

  4. After the IO completes, the callback wakes the request generator, which runs to its return statement and finishes.

  5. Catch the exception to get the request generator's return value, use it to wake the outer func1, and obtain a func2 generator.

  6. Continue likewise...


Friends familiar with algorithms and data structures will recognize this go-deep-then-return traversal: it calls for recursion. Since we can't recurse while keeping generators suspended, we can use a stack instead. In fact, this is where the term "call stack" comes from.

With a stack, we can connect all the generators in the call chain into one single generator. Continuously calling send on it yields all of the IO operation details while driving the call chain forward. The approach:

  1. Push the first generator onto the stack.

  2. Call send; if we get a generator back, push it and enter the next iteration, exploring downward.

  3. When an IO request is yielded, let the framework register it with the ioloop.

  4. After the IO operation completes, stash its result and enter the next iteration, so that the layer above can consume the IO result.

  5. When a generator finishes running, likewise stash its return value and resume the layer above.


The implementation below is not long, but it is dense.

It turns the whole call chain into a single generator. Calling send on it yields the IO jobs in the chain; completing those IOs and sending the results back keeps pushing the logic along the call chain until the whole thing finishes:

```python
def wrapper(gen):
    # the call stack, with the first generator at the bottom
    stack = Stack()
    stack.push(gen)
    result = None
    # drive the call chain layer by layer
    while True:
        # look at the top element of the stack
        item = stack.peek()
        if isgenerator(item):
            try:
                # try to obtain the next layer of the call
                # and push it onto the stack
                child = item.send(result)
                stack.push(child)
                # reset result to None for the fresh layer
                result = None
                # after pushing, go straight to the next iteration
                # and keep exploring downward
                continue
            except StopIteration as e:
                # this layer finished: stash its return value so the
                # next iteration can hand it to the layer above
                result = e.value
        else:  # an IO operation
            # yield the IO job out; when the IO completes we are
            # woken up here and stash the IO result
            result = yield item
        # reaching here means this layer is done: pop it, and the next
        # iteration resumes one layer up the call chain
        stack.pop()
        # no layer above: the whole call chain is finished
        if stack.empty():
            print("finished")
            return result
```

This is probably the most complicated part. If it's hard to follow, just understand that for the call chain in the example above, it achieves the following effect:

```python
w = wrapper(func1())
# we get "iojob of http://test.com/foo"
w.send(None)
# send in "bar", the result of the foo iojob; execution continues
# and we get "iojob of http://test.com/bar"
w.send("bar")
# send in "barz", the result of the bar iojob; the whole chain
# finishes, raising StopIteration whose value is the final result
w.send("barz")
```
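To convince yourself the scheme works, here is a runnable condensed version of the wrapper, using a plain list as the stack, together with the func1 / func2 / request chain from the earlier example:

```python
import types

def wrapper(gen):
    stack = [gen]          # the call stack
    result = None
    while True:
        item = stack[-1]
        if isinstance(item, types.GeneratorType):
            try:
                child = item.send(result)   # go one layer deeper
                stack.append(child)
                result = None
                continue
            except StopIteration as e:
                result = e.value            # this layer finished
        else:                               # an IO job: yield it out
            result = yield item
        stack.pop()
        if not stack:                       # whole chain finished
            return result

def request(url):
    result = yield "iojob of %s" % url
    return result

def func2(data):
    result = yield request("http://test.com/" + data)
    return result

def func1():
    ret = yield request("http://test.com/foo")
    ret = yield func2(ret)
    return ret

w = wrapper(func1())
print(w.send(None))    # iojob of http://test.com/foo
print(w.send("bar"))   # iojob of http://test.com/bar
try:
    w.send("barz")
except StopIteration as e:
    print(e.value)     # barz
```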

With this piece in place, the framework adds the matching code:

```python
# maintain a ready list storing all completed IO events,
# as (wrapper, result) pairs
ready = []

# once wrapped, the whole chain's IO can be driven via send
g = wrapper(func1())
# the freshly started generator enters directly, with None as result
ready.append((g, None))

# the ioloop runs this function on every cycle
# to process the generators whose IO is ready
def process_ready():
    # traverse all ready generators and push each one forward
    for g, result in ready:
        # wake the generator with the result and get the next IO job
        io_job = g.send(result)
        # once that IO completes, append the generator to the ready
        # list again, to be processed in a later round
        asyncio.get_event_loop().io_call(
            io_job, lambda result: ready.append((g, result)))
```

The core idea here is to maintain the ready list. On every cycle the ioloop sweeps it, pushing each ready generator forward and registering its new IO operation; completed IO puts the generator back on the ready list. After several rounds of ioloop iteration, a handler eventually runs to completion.

At this point, business logic written with generators runs normally.

0x04 Improved Scalability

If you've read this far, the essentials of Python coroutines should be basically clear.

We have already built a miniature coroutine framework. The standard library's implementation details look very different from this, but the underlying ideas are the same.

Our coroutine framework has one restriction: only IO operations can be made asynchronous. In the world of network and web programming, blocking is almost always IO, but there are exceptions. For example, I might want the current operation to sleep for a few seconds; using time.sleep would block the whole thread, so it needs special treatment. Likewise, CPU-intensive operations can be made asynchronous by moving them to another thread and having that thread signal when the work is done.

Therefore it's better to decouple the coroutine mechanism from network IO, making network IO just one of its scenarios and improving extensibility.

Python's official solution is to let the user wrap the blocking code by hand. Whether the wrapper registers an IO event with the ioloop or spins up a thread is entirely up to you, as long as you return a standard "placeholder", a Future, indicating that its result will arrive in the future. Its rough prototype:

```python
class Future:
    # set the result
    def set_result(self, result): pass

    # get the result
    def result(self): pass

    # whether the result has been set yet
    def done(self): pass

    # register a callback to run once the result is set;
    # several may be added
    def add_done_callback(self, callback): pass
```
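The standard library already ships a class matching this prototype: concurrent.futures.Future (asyncio.Future is its event-loop-aware sibling). A quick demonstration:

```python
from concurrent.futures import Future

fut = Future()
print(fut.done())      # False: no result has been set yet

collected = []
# the callback receives the future itself once the result is in
fut.add_done_callback(lambda f: collected.append(f.result()))

fut.set_result(42)     # "fill in" the placeholder
print(fut.done())      # True
print(fut.result())    # 42
print(collected)       # [42]
```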

With a slight change, our framework can support Future and become much more extensible. For the network request function in user code:

```python
# request is no longer a generator now; it returns a Future directly
def request(url):
    # think of the Future as a placeholder
    fut = Future()
    def callback(result):
        # fill in the placeholder when the network IO's callback fires
        fut.set_result(result)
    asyncio.get_event_loop().io_call(url, callback)
    return fut
```

Now request is no longer a generator; it returns a Future directly.

And for the framework's function that processes the ready list:

```python
for g, result in ready:
    # wake the generator and get a Future back
    fut = g.send(result)
    def callback(fut):
        # once the Future's result is set,
        # put the generator back on the ready list
        ready.append((g, fut.result()))
    fut.add_done_callback(callback)
```

0x05 development and change

Many years ago, when I used Tornado, only the yield keyword was available. Building coroutines on that meant yield and return couldn't even appear in the same function; to make a generator "return" a value you had to raise an exception manually. The effect was the same as today's return, but it was awkward and inelegant.

Later came the yield from expression. What does it do?

Put simply, it does exactly what our generator wrapper above does: it flattens the call chain through a stack. It is syntactic sugar for the wrapper logic.

With it, the same example can be written as:

```python
def func1():
    # note: yield from
    ret = yield from request("http://test.com/foo")
    # note: yield from
    ret = yield from func2(ret)
    return ret

def func2(data):
    # note: yield from
    result = yield from request("http://test.com/" + data)
    return result

# request is implemented the same way as before
```

And then you no longer need the mind-bending wrapper function:

```python
g = func1()
# returns the Future of the first request
g.send(None)
# keep running: control automatically enters func2
# and we get the Future inside it
g.send("bar")
# keep running: the rest of the call chain completes,
# raising StopIteration
g.send("barz")
```

Having yield from connect the whole call chain directly was already great, but the same expression was used both for asynchronous programming and for ordinary generator delegation, while other languages had dedicated async and await keywords. Eventually a later Python version wrapped all of this in dedicated async and await keywords, giving us the more elegant form we have today.
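With async and await, the running example looks like this. Note that the request here is a stand-in: asyncio.sleep(0) fakes the network IO, and the returned value is made up, just to keep the sketch self-contained:

```python
import asyncio

async def request(url):
    # pretend to do network IO; a real implementation
    # would await a socket or an HTTP client
    await asyncio.sleep(0)
    return url.rsplit("/", 1)[-1]   # fake response: last path segment

async def func2(data):
    return await request("http://test.com/" + data)

async def func1():
    ret = await request("http://test.com/foo")
    ret = await func2(ret)
    return ret

print(asyncio.run(func1()))  # foo
```

The structure is identical to the yield from version; async marks the coroutine and await replaces yield from.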

0x06 summary and comparison

Overall, Python's native coroutines are achieved from two angles:

  1. Based on IO multiplexing, the whole application is non-blocking on IO, achieving high efficiency.

  2. Generators turn scattered callback code back into synchronous-looking code, reducing the difficulty of writing business logic.


Languages with generator objects can implement async IO the same way. JavaScript's evolution was basically identical: the keywords are the same, and the Future class corresponds to Promise.

Go's goroutines, however, are a different breed of coroutine: they are not explicitly based on generators.

If an analogy must be drawn, Go is closer to Python's gevent: gevent brings its own runtime and monkey-patches system calls to hook into that runtime, scheduling coroutines itself. gevent focuses on networking and schedules on network IO, which keeps it relatively simple, while Go adds full multi-core support, is more complex and complete, and created a new programming paradigm around channels.

Author: Mao bean peanut
