
Why are chips so difficult to make?

Jan 03
In the digital age, our lives are inseparable from chips. Our computers, mobile phones, and even the cars we ride in are packed with them. If even one chip stops working properly, our lives are affected, whether it is a phone that malfunctions or a car that loses control.
While enjoying the convenience chips provide, have we ever stopped to ask why they are so important to the digital age, and why developing and manufacturing them is so difficult? The answer begins with the history of the chip.

From vacuum tubes to transistors
"In ancient times, we tied ropes to rule." Computing has been an integral part of our lives since the dawn of human civilization. From the balance of payments of a family to the economic direction of a country, these numbers that determine the fate of a family or a country all require calculations to arrive at. People have developed many calculation tools for this purpose, such as abacus that moves beads up and down, or calculators that can press buttons to obtain the desired results.

As our computing needs kept growing, human-powered calculation quickly hit its limits. War gave birth to the earliest computers: Turing developed an electromechanical machine to crack the German Enigma code, and to break the German Lorenz cipher the British built the "Colossus", widely considered the world's first programmable digital computer. These machines could easily perform calculations that were difficult or impossible for humans alone.

At the core of these giant computers was the vacuum tube, which looks like an oversized light bulb with metal wires inside. When powered, each tube settles into one of two states, current flowing or not flowing, which correspond to the 1s and 0s of binary. With those two digits, any calculation can in principle be performed; today's online virtual world can be loosely understood as being built from countless 1s and 0s.
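
As a small illustration of that point, here is a minimal Python sketch (the numbers chosen are arbitrary) showing how two on/off states are enough to represent numbers and carry out arithmetic:

```python
# Minimal sketch: two states (1/0, current or no current) are enough to
# represent numbers and to compute with them. The values are arbitrary.

def to_bits(n: int, width: int = 8) -> str:
    """Write an integer as a string of 1s and 0s (on/off states)."""
    return format(n, f"0{width}b")

a, b = 19, 44
print(f"{to_bits(a)} + {to_bits(b)} = {to_bits(a + b)}")
# 00010011 + 00101100 = 00111111, i.e. 19 + 44 = 63, done entirely in 1s and 0s
```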

Powerful as vacuum-tube computers were, they had serious limitations. For one thing, vacuum tubes are bulky: the ENIAC machine built at the University of Pennsylvania used more than 17,000 of them, occupied an enormous amount of floor space, and consumed a frightening amount of power. For another, such a huge number of tubes created constant hidden risks. On average, a tube failed on the machine every two days, and each fault took at least 15 minutes to track down. To produce their 1s and 0s reliably, people began looking for an alternative to the vacuum tube.

The breakthrough came from the famous Bell Labs, and their choice was the semiconductor, a material whose conductivity lies between that of conductors (which let current pass freely, such as copper wire) and insulators (which do not conduct at all, such as glass). Under certain conditions its conductive properties can change. For example, the "silicon" (Si) we have all heard of barely conducts on its own, but once small amounts of other materials are added, it becomes conductive. This is where the name "semiconductor" comes from.
William Shockley of Bell Labs was the first to propose the theory that applying an electric field to a semiconductor could change its conductivity. He was, however, unable to confirm the theory experimentally.

Inspired by this theory, two of his colleagues, John Bardeen and Walter Brattain, built a semiconductor device called the "transistor" two years later. Unwilling to be outdone, Shockley developed an improved transistor a year after that. Roughly a decade later, the three shared the Nobel Prize in Physics for their contributions to the field. As the transistor family kept expanding and gaining new members, it became the cornerstone of the digital age.

Chips and the birth of Silicon Valley

As transistors gradually replaced vacuum tubes, their own limitations surfaced in practical applications. Chief among them: how to wire thousands of transistors together into a usable circuit.

For transistors to perform complex functions, a circuit also needs resistors, capacitors, inductors and other components, all of which then have to be soldered and wired together. Because these components had no standard sizes, building circuits was enormously labor-intensive and error-prone. One solution proposed at the time was to specify the size and shape of every electronic component and redesign circuits around such standardized modules.

Jack Kilby of Texas Instruments was unenthusiastic about that plan, arguing that it did not solve the fundamental problem: however standardized the parts, they would not be small, and the resulting modular circuits would still be too bulky for smaller devices. His plan was to integrate everything, placing all the transistors, resistors and capacitors on a single piece of semiconductor material, saving a great deal of subsequent manufacturing time and reducing the chance of mistakes.

In 1958 he built a prototype out of "germanium" (Ge) containing one transistor, three resistors and a capacitor; once connected with wires, it could produce a sine wave. This brand-new circuit was called an "integrated circuit", and later acquired a more familiar name: the chip. Kilby won the Nobel Prize in Physics in 2000 for the invention.

At about the same time, eight engineers resigned from Shockley's company together and founded Fairchild Semiconductor, earning themselves the nickname the "traitorous eight" in semiconductor history. Their leader, Robert Noyce, also hit on the idea of producing multiple components on a single piece of semiconductor material to create an integrated circuit. Unlike Kilby's approach, his design integrated the wiring and the components into one piece, which gave it a clear advantage in manufacturing. The only problem was cost: despite its obvious advantages, Noyce's integrated circuit cost 50 times as much as a conventional circuit.

Just as war had produced the prototype of the computer decades earlier, the Cold War brought unexpected business to Noyce's chips. After the Soviet Union launched the first artificial satellite and put the first human into space, the United States, sensing a crisis, launched a sweeping catch-up plan and decided to land people on the moon as its answer. That task demanded an enormous amount of computation (controlling the rocket, operating the landing module, calculating the optimal launch window, and so on), and NASA bet on Noyce's chips: the integrated circuit was smaller and consumed less power. To send someone to the moon, every gram of weight and every watt of energy had to be accounted for, and for such an extreme project the chip was clearly the better choice.

In the lunar landing program, the chip showed the world its potential. Noyce noted that in the Apollo program's computers, his chips ran for 19 million hours and failed only twice, one of those failures being caused by external factors.

The moon landing also proved that chips could operate normally in the extremely harsh environment of outer space. After Fairchild's rise, employees who left the company went on to found firms such as Intel and AMD nearby, and the area dense with semiconductor companies later earned a more famous name: Silicon Valley.

Lithography
Integrated circuits are far smaller than circuits built from discrete transistors and components; a microscope is often needed to inspect their internal structure and check their quality. During one such inspection, Jay Lathrop of Texas Instruments had a sudden thought: if a microscope magnifies things when you look through it from the top, could it shrink things when used from the bottom up?

This was no idle thought. At the time, integrated circuits were approaching the limits of manual fabrication, and further breakthroughs were hard to achieve. If a circuit diagram could instead be "shrunk" onto semiconductor material, it would become possible to manufacture chips with automated techniques and achieve mass production.

Lathrop quickly put his idea to the test. First he bought a chemical called photoresist from Kodak and coated it onto the semiconductor material. Then, as he had imagined, he turned the microscope upside down and covered the lens with a plate that blocked the light everywhere except a small pattern.

Finally, he let light pass through the lens and strike the photoresist at the other end of the microscope. Under the light, the exposed photoresist underwent a chemical reaction, slowly dissolving away and revealing the silicon material beneath. The exposed shape matched his original pattern exactly, but reduced by a factor of hundreds to thousands. Into the exposed grooves, fabricators can deposit new material and form circuit connections, then wash away the remaining photoresist. This set of steps is the photolithography process used to manufacture chips.

The process was subsequently refined so that each step had clear standards to follow, ushering in the era of standardized mass production of integrated circuits. As chips have grown more complex, making an integrated circuit now requires repeating this cycle dozens of times, as the sketch below illustrates.
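
To make that repeated cycle concrete, the sketch below models it as a simple loop; the step names and the layer count are illustrative only, not an actual process recipe.

```python
# Simplified model of the repeated photolithography cycle described above.
# Step names and the layer count are illustrative, not a real process recipe.

PHOTOLITHOGRAPHY_STEPS = [
    "coat the wafer with photoresist",
    "expose the resist to light through a patterned mask",
    "develop: dissolve the exposed resist, uncovering the material below",
    "deposit or etch material in the uncovered regions",
    "strip the remaining photoresist",
]

def print_process_flow(num_layers: int) -> None:
    """Print the cycle once per patterned layer of the chip."""
    for layer in range(1, num_layers + 1):
        print(f"Layer {layer}:")
        for step in PHOTOLITHOGRAPHY_STEPS:
            print(f"  - {step}")

print_process_flow(num_layers=2)  # modern chips repeat this dozens of times
```
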
Fairchild followed suit and developed photolithography production technology of its own. Besides Noyce, the other seven founders of the company were remarkable in their own right, and Gordon Moore stood out among them.

In 1965, he made a prediction about the future of integrated circuits: as production technologies such as photolithography kept improving, the number of components on a chip would double every year. Over the long run, the computing power of chips would grow exponentially while their cost fell sharply, with the obvious consequence that chips would pour into ordinary households and completely change the world. Moore's prediction later became known as "Moore's Law".
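
As a rough worked example of what annual doubling implies (the starting count below is an arbitrary illustration, not Moore's actual 1965 figure):

```python
# What doubling every year implies, starting from an arbitrary illustrative
# component count (not Moore's actual 1965 figure).

start_year, start_components = 1965, 64
for year in range(start_year, start_year + 11):
    count = start_components * 2 ** (year - start_year)
    print(f"{year}: ~{count:,} components per chip")
# Ten doublings multiply the count by 1,024 (2**10): exponential growth.
```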

The premise of Moore's Law is continuous innovation in manufacturing processes. The photolithography technology developed by the early companies was already remarkably refined: it was like drawing with light on the photoresist stroke by stroke, carving out lines only a micron wide, and it could pattern multiple chips at once, greatly increasing production capacity. But as demands on manufacturing precision kept rising, micron-scale lithography machines could no longer meet the industry's needs, and nanometer-scale lithography machines became the new goal.

Developing such a machine is not easy: how to perform lithography at ever-smaller scales became the bottleneck holding back the technology.

Extreme Ultraviolet Lithography
By 1992, Moore's Law was in danger of expiring: to keep it alive, chip circuits had to be made even smaller, which placed new demands on both the light sources used and the lenses they passed through.
When Lathrop first developed photolithography, he used visible light, the simplest option. Its wavelengths are a few hundred nanometers, so the smallest features it can print on a chip are also a few hundred nanometers across. To print smaller components (say, only a few tens of nanometers), the light source has to go beyond visible light into the ultraviolet.

Some companies developed manufacturing equipment based on deep ultraviolet light (DUV), with wavelengths below 200 nanometers. But the long-term goal was extreme ultraviolet light (EUV): the shorter the wavelength, the finer the detail that can be printed on a chip. The target ultimately became EUV with a wavelength of 13.5 nanometers, and ASML of the Netherlands became the world's only producer of EUV machines.
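
The relationship between wavelength and printable feature size is often summarized by the Rayleigh criterion, CD ≈ k1 · λ / NA; the k1 and numerical-aperture (NA) values used below are typical illustrative figures, not the specifications of any particular machine.

```python
# Rayleigh criterion: smallest printable feature CD ≈ k1 * wavelength / NA.
# The k1 and NA values are typical illustrative figures, not tool specs.

def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.4) -> float:
    """Approximate smallest printable feature (critical dimension) in nm."""
    return k1 * wavelength_nm / na

print(f"Visible light (400 nm, NA 0.6):  ~{min_feature_nm(400, 0.6):.0f} nm")
print(f"DUV (193 nm, NA 1.35 immersion): ~{min_feature_nm(193, 1.35):.0f} nm")
print(f"EUV (13.5 nm, NA 0.33):          ~{min_feature_nm(13.5, 0.33):.0f} nm")
```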

EUV technology took nearly 20 years to develop. To build a working EUV machine, ASML had to source state-of-the-art parts from around the world. A lithography machine first needs a light source: to generate EUV, a tin droplet only a few tens of microns across is fired through a vacuum at more than 300 kilometers per hour while being hit precisely by a laser, not once but twice.

The first pulse heats the droplet; the second blasts it into a plasma at around 500,000 degrees, far hotter than the surface of the sun. The process has to be repeated 50,000 times per second to generate enough EUV light. One can imagine how many advanced components such high-precision technology requires.
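
Some quick arithmetic using only the figures quoted above gives a sense of the timing involved; the droplet diameter is taken as roughly 30 microns for illustration.

```python
# Back-of-the-envelope arithmetic using the figures quoted above; the droplet
# diameter of 30 microns is an assumed value within "a few tens of microns".
droplets_per_second = 50_000
droplet_speed_kmh = 300
droplet_diameter_um = 30

interval_s = 1 / droplets_per_second          # time between droplets
speed_m_s = droplet_speed_kmh * 1000 / 3600   # km/h -> m/s
spacing_mm = speed_m_s * interval_s * 1000    # distance between droplets

print(f"Time between droplets: {interval_s * 1e6:.0f} microseconds")
print(f"Droplet speed: {speed_m_s:.0f} m/s")
print(f"Spacing between consecutive droplets: ~{spacing_mm:.1f} mm")
print(f"Each ~{droplet_diameter_um} micron droplet must be hit twice in flight")
```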

Actual operation is even more complicated than the description above. For example, to remove the large amount of heat generated during laser irradiation, a fan is needed for ventilation, spinning at 1,000 revolutions per second. That speed exceeds what physical bearings can withstand, so the fan is levitated on magnets while it spins.

The laser system also places strict requirements on the density of the gas inside it, and reflections of the laser off the tin droplets back into the instrument must be avoided. Developing the machines that emit the lasers alone took more than 10 years of research and development, and each emitter requires more than 450,000 components.

Even once the tin droplets have been bombarded, the resulting EUV light is hard-won: researchers still had to learn how to collect it and guide it onto the chip. EUV's wavelength is so short that surrounding materials tend to absorb it rather than reflect it. Eventually the company Carl Zeiss developed extraordinarily smooth mirrors that can reflect EUV.

The smoothness of these mirrors defies imagination: in the company's own terms, if a mirror were enlarged to the size of Germany, its largest irregularity would be only 0.1 mm. The company is also confident that its mirrors could steer a laser precisely enough to hit a golf ball on the moon.
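
Scaling the "size of Germany" comparison back down to an actual mirror gives a feel for the numbers; Germany's extent is approximated here and the mirror diameter is an assumed illustrative value.

```python
# Scaling the "size of Germany" comparison back down to an actual mirror.
# Germany's extent is approximated; the mirror diameter is an assumed value.
germany_extent_m = 900_000      # roughly 900 km north to south
irregularity_m = 0.1e-3         # 0.1 mm bump at that enlarged scale
mirror_diameter_m = 0.5         # assumed illustrative mirror size

relative_flatness = irregularity_m / germany_extent_m
bump_m = relative_flatness * mirror_diameter_m
print(f"Relative flatness: about {relative_flatness:.1e}")
print(f"Largest bump on a {mirror_diameter_m} m mirror: ~{bump_m * 1e12:.0f} picometers")
```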

Such complex equipment demands not only science and technology but also full command of the supply chain. ASML itself produces only 15% of the components in its EUV machines, with the rest coming from partners around the world. It monitors these purchased parts closely and, when necessary, even acquires the supplier companies and runs them itself. Each machine is a crystallization of technology from many countries.

The first EUV prototype was produced in 2006, and the first commercial EUV machine shipped in 2010. In the coming years, ASML is expected to launch a new generation of EUV machines costing $300 million each.

Applications of chips

With advanced manufacturing processes in place, a wide variety of chips have emerged. One common view is that the chips of the 21st century fall into three major categories.

The first category is logic chips, which serve as the processors in our computers, mobile phones and network servers;
The second category is memory chips. A classic example is the DRAM chip developed by Intel. Before DRAM, data storage relied on magnetic cores: a magnetized core represented 1 and an unmagnetized one represented 0. Intel's approach paired a transistor with a capacitor, with a stored charge representing 1 and no charge representing 0. The principle resembles that of magnetic cores, but with everything integrated onto a chip it is smaller and less error-prone. Memory chips provide the short-term and long-term memory computers need to run (a simplified sketch of the idea follows this list);
The third category, "analog chips", processes analog signals.
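
Here is the promised sketch of the DRAM idea: one bit per cell, stored as charge or no charge on a capacitor reached through a transistor. The class and method names are invented for illustration, and real DRAM also needs periodic refresh because the stored charge leaks away.

```python
# Conceptual sketch of a DRAM array: one bit per cell, stored as charge (1)
# or no charge (0) on a capacitor behind an access transistor. Names are
# invented for illustration; real DRAM also requires periodic refresh.

class DramArraySketch:
    def __init__(self, num_cells: int) -> None:
        self.charge = [0] * num_cells     # 0 = discharged, 1 = charged

    def write(self, address: int, bit: int) -> None:
        self.charge[address] = bit        # charge or drain the capacitor

    def read(self, address: int) -> int:
        return self.charge[address]       # sense whether charge is present

mem = DramArraySketch(num_cells=8)
mem.write(3, 1)
print([mem.read(i) for i in range(8)])    # [0, 0, 0, 1, 0, 0, 0, 0]
```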

Of these, logic chips are probably the best known. Although Intel developed the earliest DRAM memory chips, it was steadily losing that market to Japanese companies. In 1980, Intel entered into a partnership with IBM to manufacture central processing units, or CPUs, for personal computers.

With the advent of IBM's first personal computer, the Intel processor built into it became standard equipment across the industry, much as Microsoft's Windows became the operating system most familiar to the public. The gamble also allowed Intel to withdraw completely from the DRAM business and reinvent itself.

CPU development did not happen overnight. As early as 1971, Intel created the first microprocessor, which compared with a modern CPU could handle only narrow, specific tasks. The design process took half a year. That microprocessor contained only a few thousand components, and the design tools were nothing more than colored pencils and rulers, methods a medieval craftsman would recognize. Lynn Conway later developed an approach that made automated chip design possible; with it, students who had never designed a chip could learn to produce a working design in a short time.

In the late 1980s, Intel developed the 486 processor, which packed 1.2 million tiny components onto a small piece of silicon to generate all those 0s and 1s. By 2010, the most advanced microprocessor chips carried a billion transistors. Developing such chips is inseparable from design software produced by a handful of dominant companies.
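
Taking the two figures in this paragraph at face value (and treating the 486 as a 1989 chip), a quick calculation shows how closely that growth tracks Moore's Law:

```python
# Rough check of the growth rate implied by the two figures quoted above,
# treating the 486 as a 1989 chip.
import math

transistors_486 = 1_200_000        # Intel 486, late 1980s
transistors_2010 = 1_000_000_000   # most advanced chips around 2010
years = 2010 - 1989

doublings = math.log2(transistors_2010 / transistors_486)
print(f"Growth factor: ~{transistors_2010 / transistors_486:.0f}x over {years} years")
print(f"About {doublings:.1f} doublings, i.e. one every {years / doublings:.1f} years")
```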

Another type of logic chip, the graphics processing unit (GPU), the chip at the heart of a graphics card, has attracted increasing attention in recent years. Nvidia is a key player here. Early on, the company bet that 3D graphics were the future, so it designed GPUs that could handle 3D graphics and developed the accompanying software that tells the chips how to work. Unlike the "sequential calculation" model of Intel's CPUs, the GPU's advantage is that it can perform a large number of simple operations at the same time.

Few foresaw that in the era of artificial intelligence, GPUs would find a new mission. To train AI models, scientists use data to continuously optimize algorithms so the models can perform tasks assigned by humans, such as telling cats from dogs, playing Go, or conversing with people. The GPU, built to process data in parallel across many operations at once, has a unique advantage here and has taken on a new life in the AI era.
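
The contrast between sequential and massively parallel styles can be illustrated with a small Python sketch; NumPy's vectorized arithmetic stands in for the many simple simultaneous operations a GPU performs (this runs on a CPU and only illustrates the programming model, not real GPU execution).

```python
# Sequential vs. "many simple operations at once": NumPy vectorization stands
# in for GPU-style parallelism. This runs on a CPU and only illustrates the
# programming model, not actual GPU execution or speed.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random(1_000_000)
inputs = rng.random(1_000_000)

# Sequential style: one multiply-accumulate at a time, like a simple CPU loop.
total_sequential = 0.0
for w, x in zip(weights, inputs):
    total_sequential += w * x

# Parallel style: express the whole computation as a single array operation,
# letting the hardware apply the same simple multiply to every element at once.
total_parallel = float(np.dot(weights, inputs))

print(np.isclose(total_sequential, total_parallel))  # same result, different model
```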

Another important application of chips is communications. Irwin Jacobs saw that chips could run the complex algorithms needed to encode massive amounts of information, so he and his friends founded Qualcomm to enter the communications field. The earliest mobile phones, as many remember, looked like black bricks and were nicknamed accordingly.

Communication technology then advanced rapidly: 2G could transmit pictures and text, 3G could open websites, 4G was enough to stream video smoothly, and 5G promises an even bigger leap. Each "G" stands for "generation", and with each generation of wireless technology, the amount of information we transmit over radio waves has grown enormously. Today we grow impatient at the slightest lag while watching video on our phones, forgetting that little more than a decade ago we could only send text messages.

Qualcomm took part in developing 2G and the mobile technologies that followed. Using chips that keep improving in step with Moore's Law, Qualcomm can squeeze more phone calls into the limited radio spectrum. Upgrading to 5G requires not only new chips in phones but also new hardware in base stations; with greater computing power, this hardware and these chips can move data wirelessly at higher speeds.
Manufacturing and supply chain

In 1976, nearly every company that designed chips also ran its own manufacturing operation. But if chip design and chip manufacturing are separated, with fabrication handed to a specialized foundry, the costs borne by the design company drop significantly.

TSMC emerged to fill this role, promising only to manufacture chips, never to design them, so that design companies need not worry about their confidential information leaking. Nor does TSMC depend on selling chips of its own: as long as its customers succeed, it succeeds.

Even before TSMC, some American chip companies had looked across the Pacific: in the 1960s, Fairchild set up a facility in Hong Kong to assemble chips shipped from California. In its first year of production, the Hong Kong plant assembled 120 million units, at extremely low labor cost but with excellent quality. Within a decade, nearly all U.S. chip companies had established assembly plants in Asia, laying the foundation for today's chip supply chain centered on East and Southeast Asia.

Asia's efficiency and obsession with quality soon threatened the United States' position in the chip industry. In the 1980s, executives responsible for testing chip quality were startled to find that chips made in Japan had surpassed American ones: the failure rate of ordinary American chips was 4.5 times that of Japanese chips, and the worst American chips failed 10 times as often as Japanese ones. "Made in Japan" was no longer a byword for cheap, low-quality goods. More alarming still, even American production lines pushed to their limits were far less efficient than Japan's. "The cost of capital in Japan is only 6% to 7%. At my best, the cost was 18%," AMD CEO Jerry Sanders once said.

The financial environment made things worse: to curb inflation, U.S. interest rates at one point reached 21.5%, while in Japan chip companies were backed by large conglomerates and a population accustomed to saving, which allowed banks to extend large, long-term, low-interest loans. With that capital behind them, Japanese companies could seize market share aggressively.

As a result of this shift, the companies capable of producing advanced logic chips ended up concentrated in East Asia, with finished chips sent to nearby regions for assembly. Apple's chips, for example, are produced mainly in South Korea and Taiwan and then sent to Foxconn for assembly. They include not only the main processor but also chips for Wi-Fi and Bluetooth, for the camera, for motion sensing, and more.

As the ability to manufacture chips has become concentrated in a few companies, these foundries have gained greater power, coordinating the needs of different customers and even setting the rules. Since most chip-design companies no longer have manufacturing capability of their own, they have little choice but to go along. This growing power has also become one of the focal points of today's geopolitical struggles.

Conclusion
From the machines that cracked codes in World War II to the spacecraft that carried people to the moon, from the Walkman that put music everywhere to the planes and cars we use every day and the phones and computers on which you are reading this article, all of these devices depend on chips.

Every day, an ordinary person relies on dozens if not hundreds of chips. All of this depends on the development of chip technology and on the capacity to produce and manufacture chips. The chip is one of the most important inventions of our era: developing new chips requires not only scientific and technological support but also advanced manufacturing capability, as well as a consumer market in which to apply them.

The global layout of chip design and manufacturing capability settled into its current pattern over decades of change, and in this era it has taken on new significance.

For more electronic component requirements, please see: https://www.megasourceel.com/products.html


MegaSource Co., LTD.