Friday, May 20, 2011

MEMS can benefit the largest interconnected machine on earth

I never thought about it before – but the U.S. power grid is the largest interconnected machine on earth, noted Igor Paprotny, post-doc researcher at the Berkeley Sensor & Actuator Center (BSAC). The grid has 9,200 generating units, 1,000,000 MW of capacity, and 300,000 miles of transmission lines. This aging behemoth would benefit from having sensors that report on system status and health. The urgency is evident from data Paprotny presented: there has been a 126% increase in non-disaster-related blackouts affecting at least 50,000 customers (reported on CNN, 8/9/10). The estimated loss from the Northeast blackout in 2003 alone was $6B.

Researchers at BSAC have developed a self-powered wireless MEMS sensor. Their long-term goal is ubiquitous power-system sensing, especially as they further develop their sticky-tab meter: the MEMS sensor is literally stuck onto a location, making installation very inexpensive. Applications include modules that measure the flow of power in the grid, underground cables that report on their condition, wireless electric meters, and equipment status ID chips. The group has already devised a project in which sticky-tab modules are placed on top of circuit breakers in Cory Hall at UC-Berkeley.

The version 1.0 form factor of the MEMS sensor module is 3.5cm x 1.5cm; version 4.0 (the sticky-tab) will be much smaller. Among the challenges the researchers are pursuing: AC scavenging/overcurrent protection, and determining whether the sensor degrades equipment performance. They also need to determine whether the sensor will keep working for 40+ years. (Debra Vogler, Sr. Technical Editor)


Tuesday, May 10, 2011

Who has the toughest ITRS road?

A recent half-day SEMI Northeast Forum at the U. of Albany's College of Nanoscale Science and Engineering (CNSE) explored where the ITRS is leading manufacturers and suppliers, from four viewpoints: metrology, contamination control, lithography, and backend. One overarching thought coming away from the excellent talks: there's a lot of work to be done in each of those areas... so who's got the toughest challenges ahead of them?

On the ITRS roadmap, conference papers appear as early as 12 years prior to device/material productization, process research tools show up about eight years prior to production, and alpha-tool development usually comes ~2 years prior. But metrology requirements precede the actual research tool (e.g., to work out the ins and outs of defect detection), so researchers usually have to make do with something suboptimal for R&D, explained Alain Diebold from Albany CNSE. (A good example is EUV: mask substrate defect inspection, mask blank inspection, AIMS, patterned mask inspection.)
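
(For the curious, here's a quick back-of-the-envelope sketch of those lead times in Python; the year offsets come from Diebold's numbers above, and the 2016 production year is just a placeholder.)

    # Rough ITRS milestone calculator using the lead times quoted above:
    # conference papers ~12 years before productization, process research
    # tools ~8 years before production, alpha tools ~2 years before.
    def itrs_milestones(production_year):
        """Return approximate milestone years preceding volume production."""
        return {
            "first conference papers": production_year - 12,
            "process research tools": production_year - 8,
            "alpha tools": production_year - 2,
            "volume production": production_year,
        }

    for milestone, year in sorted(itrs_milestones(2016).items(), key=lambda kv: kv[1]):
        print(year, milestone)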

Looking at the 2010 metrology roadmap, there's a lot of yellow (indicating known solutions but not yet meeting requirements, i.e. tool matching) in metrology for litho, frontend processes (transistors and capacitors, equivalent oxide thickness), and interconnect. But for 16nm (~2016) and 11nm (2019) almost all of those yellows turn to red, meaning no solution pathway has been identified.


Like metrology, wafer contamination has many crossovers with other areas of semiconductor manufacturing: emerging materials, front-end processes, litho, interconnects, EHS, and metrology, explained Christopher Long, senior engineering and program manager at IBM Research, summarizing efforts within the ITRS' Yield Enhancement technology working group (TWG). The tightest cleanroom requirement by ISO standards (Class 1) is fewer than ten 100nm particles per cubic meter (Class 2 is ≤100 particles) -- but that still leaves room for plenty of particles "which are potential killers," he noted. And not everyone is at ISO Class 1, or even feels they need to be -- resigned, Long quipped, to "solve no disaster before its time."
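
For context, those Class 1 and Class 2 limits follow the ISO 14644-1 class formula; here's a minimal sketch -- the 2.08 exponent and the 0.1µm reference size come from the ISO standard rather than from Long's talk:

    # ISO 14644-1 airborne particle limit: Cn = 10**N * (0.1/D)**2.08,
    # where N is the ISO class and D is the particle size in micrometers.
    # At D = 0.1 um this gives ~10 particles/m^3 for Class 1 and
    # ~100 particles/m^3 for Class 2, matching the figures quoted above.
    def iso_class_limit(iso_class, particle_size_um):
        """Maximum particles per cubic meter at or above the given size."""
        return 10 ** iso_class * (0.1 / particle_size_um) ** 2.08

    print(iso_class_limit(1, 0.1))   # ~10 particles/m^3 (ISO Class 1)
    print(iso_class_limit(2, 0.1))   # ~100 particles/m^3 (ISO Class 2)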

Sub-100nm (ultrafine/nano) particles are not measurable with optical particle counters, instead requiring condensation nucleus counters, Long noted. Another big challenge: quantifying and characterizing nanoparticles and their generation sources in the wafer environment (e.g. outgassing). Monitoring and identifying yield-crimping defects and process issues at the wafer edge/bevel also shows promise, particularly in the wafer "E" region.

Fab issues related to airborne molecular contamination (AMC) are well-documented; the key challenge for IDMs is figuring out what level to control to (without overkill), and doing it rapidly (i.e., not over weeks). The ITRS is resolved to try not to "overspec the values" and instead reset with numbers "a little more based in reality," i.e., low-PPB (parts per billion) instead of PPT (parts per trillion). There also needs to be a better understanding of how and why some particles stick/agglomerate while others don't, and what this tendency means for the surrounding environment.
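
To put the ppb-versus-ppt debate in rough perspective, here's a minimal sketch converting a gas-phase mole fraction into molecules per cubic centimeter, assuming ideal-gas behavior at roughly room temperature and atmospheric pressure (the conditions are illustrative, not from the talk):

    # Convert an AMC level given as a mole fraction (ppb or ppt) into
    # molecules per cm^3, assuming an ideal gas at ~25 C and 1 atm.
    K_BOLTZMANN = 1.380649e-23   # J/K
    PRESSURE_PA = 101325.0       # 1 atm
    TEMP_K = 298.15              # ~25 C

    def molecules_per_cm3(mole_fraction):
        number_density_per_m3 = PRESSURE_PA / (K_BOLTZMANN * TEMP_K)  # ~2.5e25 /m^3
        return mole_fraction * number_density_per_m3 * 1e-6           # per cm^3

    print("1 ppb ~ %.1e molecules/cm^3" % molecules_per_cm3(1e-9))   # ~2.5e10
    print("1 ppt ~ %.1e molecules/cm^3" % molecules_per_cm3(1e-12))  # ~2.5e7

Even a parts-per-trillion level still works out to tens of millions of molecules in every cubic centimeter of fab air, which helps explain the push to spec levels that are demanding but "based in reality."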

Other areas the TWG is getting its arms around: Ultrapure water (better understanding of particle measurement capabilities ≤65nm, impacts/spec levels for critical organics); liquid chemicals (which ones are and are not harmful to the process); and precursors (understanding and control, coordinated with SEMI Precursor task force and other TWGs for FEP and Interconnect). Wafer static charge/electrostatic discharge due to particles/ionization also is of interest, as it can lead to ESD damage and process interruptions, what Long called a "ghost in the tool."

Figure: Assuming a critical defect size (CDS) of 11.3nm and a 1/x³ defect size distribution. (Left) For the 2016/22.5nm node, the number of particles scaled to the CDS: Class 1 = 7831, Class 2 = 783. (Right) For the 2021/12.6nm node: Class 1 ≈ 25,000, Class 2 = 2520.

Bryan Rice, SEMATECH director of lithography, was quick to point out that the ITRS is essentially a rallying call to bring disparate groups of suppliers to common ground, and is subject to politics and academic considerations. It can't be called "wrong" for "failing" to hit some timeline points; rather, pushouts are just poor forecasting, and he urged that working groups be held responsible for more accurate estimates and for better harmonizing industry efforts. It seems accountability is the key -- if the ITRS is wrong, well, nobody "loses," he said -- but if ASML's EUV roadmap is wrong, they'll lose billions of dollars! (During the Q&A, Bill Tobey noted that, to its credit, "ASML solved lots of unsolvable problems" on its own; Rice countered that the challenges facing EUV are too much for one company alone, and that ASML stayed out of the game until it had five customers for preproduction.)

Identifying/managing defects in EUV lithography is "massively" behind, Rice noted, partly because there's no money to be made (SEMATECH is funding AIMS blank inspection efforts). "We need to create a business solution before there can be a technical solution," he said. The industry needs imaging and chemical characterization of even the smallest defects ("we can't TEM every defect"), and new technologies to clean them. From a materials standpoint, we're "a long way off" on linewidth roughness, with possible novel solutions in rinse and etch chemistry.

Meanwhile, the current EUV-postponing alternative, 193nm immersion with multipatterning, still needs a fast mask writer, improved LWR, thin resists for better aspect ratios, resists with better etch resistance, and brighter sources to support slower resists. A brighter 8 W/mm²/sr source for mask metrology is available, but 100 W/mm²/sr is needed ASAP, Rice said. (Speaking of oft-maligned source power, that needs to improve by a factor of ten within the current calendar year to 100W, and then to 250W to support a volume-worthy 125 wafers-per-hour (WPH) throughput.)
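
As a rough sanity check on those throughput figures, here's a minimal sketch that assumes wafers-per-hour scales linearly with source power at a fixed resist dose, calibrated only to the 250W-for-125WPH point quoted above; real scanner throughput models include overhead and are considerably more involved:

    # Back-of-the-envelope EUV throughput estimate, assuming throughput is
    # proportional to usable source power at a fixed resist dose and
    # ignoring stage/overhead time. Calibrated to 250 W -> 125 WPH.
    WPH_PER_WATT = 125.0 / 250.0   # ~0.5 wafers/hour per watt

    def estimated_wph(source_power_watts):
        return WPH_PER_WATT * source_power_watts

    for power in (10, 100, 250):
        print("%3d W source -> ~%.0f WPH" % (power, estimated_wph(power)))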


Greg Smith of Teradyne noted how his company (and the backend in general) must balance technology/ITRS requirements with what the customer base needs. New processes and technologies such as through-silicon vias (TSVs) offer a multitude of advantages, but figuring out how to work with them isn't easy, he noted; they involve complex assembly steps and "new classes of faults that we're not familiar with." How does one test for these new types of faults, which can come from the TSV itself (e.g., voids and oxide pinholes), from bonding (e.g., misalignment and height variation), or from wafer thinning (I-V degradation)? How does one contact devices for test, and can it be done before assembly? "Current tools are not equipped to do lots of this." Smith mentioned a new test design with a very high angle of attack and large z-axis movement (the chuck moves down, then over, then up again) to get all the way down into those tall structures. And rather than testing every TSV, one could test each stack as if it were in an application (e.g., making a phone call) to determine quality.
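
One way to see why those added fault sources matter: every die and every bonding step multiplies into the final stack yield. Here's a minimal sketch with purely hypothetical yield numbers (not from Smith's talk):

    # Hypothetical compound-yield model for a TSV die stack: overall yield
    # is roughly the product of per-die yield and the yield of each
    # bonding/thinning step. All numbers below are made up for illustration.
    def stack_yield(die_yield, bond_yield, n_dies):
        """Compound yield of an n-die stack with (n-1) bonding interfaces."""
        return (die_yield ** n_dies) * (bond_yield ** (n_dies - 1))

    for n in (2, 4, 8):
        y = stack_yield(die_yield=0.98, bond_yield=0.99, n_dies=n)
        print("%d-die stack: ~%.1f%% compound yield" % (n, 100 * y))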

Another TSV concern, pointed out by Diebold during the Q&A, is the need for standardization; chips from different suppliers can have TSVs in different locations depending on the type of device (e.g., memory), so connecting these different chips is a problem. Smith added that vendors tend to view yield data as proprietary, though sharing it could go a long way toward addressing wafer contamination and yield challenges.

Can we judge whether one area has a tougher road(map) to navigate? Are the challenges faced and met by metrology -- which cuts across so many other areas -- the most crucial? How about contamination control? What of lithography, the industry's workhorse and the target of much metrology focus? And what about the backend, which must balance the device structures coming down the manufacturing line, its own innovations, and customer requirements?

Who do you think has the toughest path ahead, and the best chance to get through it successfully?