I caught up with Pavel Zivny, a system engineer at Tektronix in Portland, Oregon. Pavel was happy to share some of the technical aspects of the paper he was presenting at DesignCon.
Nolan Johnson: Pavel Zivny, there are a couple of reasons I’d like to speak with you here at DesignCon. First, you’ve just presented a paper here on an interesting topic, and second, you have a role with DesignCon in creating the technical tracks?
Pavel Zivny: I’m a Tektronix employee, and I’m part of the IEEE 1023 standards effort. At DesignCon, I have several interests. I am here to find out what is interesting to people and present my own papers, but also for the next year, I’m part of the committee which selects the tracks.
As with many conferences, there can be quite a lot of inertia. Some tracks are there for a while and they stay that way. And as long as people show up, we consider them healthy.
Sometimes the show does push toward doing something slightly new because we perceive that the industry should be moving in some direction. It used to be that if you came to DesignCon with an optical interest, that just meant you were really intending to go to OFC. It’s not like that anymore. There are both papers and many interested people from the industry who want to hear about optical news and developments in optical measurements. At OFC, the academic interests cover really deep trenches of research, which have been going on for many years in some optical subjects.
Johnson: You’re confirming what I was noticing, not only about the tracks, but also about what people are here talking about. There’s been a shift with DesignCon. Years back, this conference was focused strictly on design. Now you look at the content and it’s increasingly about engineering. There’s a lot of signal integrity, crosstalk, and techniques for high-speed and high-frequency work. It’s much more of an engineering agenda than it was before, making it much more useful for assemblers and fabricators.
Zivny: I think that there is a widening. There are the EDA conferences, and then there’s ISSCC in San Francisco. What really worked out well for DesignCon is that it’s both a show and a conference; signal integrity is everywhere, right? It’s becoming more critical that you really take signal integrity as a first focus, and you have to design to that.
That helped, I think, in making this aspect of the show stronger. It’s also nice to see that some of the adjacencies we may have ignored a few years back, such as machine learning, are getting some traction now. Obviously, in the industry, machine learning moved from computer academia to some real deployments, some real applications.
Johnson: Right. And that’s what your paper was on, if I recall.
Zivny: That’s right. The focus of the paper is a measurement called TDECQ. There was also a panel. I should say a little bit about each.
In the past, we had measurements based on the eye diagram of optical systems, and now we have diverged from the plain old eye diagram. The way optics has developed, people mostly use something called TDECQ, which on the surface looks completely different, but it’s not that different. It looks at the BER eye opening in a particular way. The eye diagram is still under there, deep in it anyway, but it’s the BER eye as a function of the voltage eye, or power eye. There is a recalculation going on, and the measurement cleverly compensates for the oscilloscope noise, and those two things together mean that there is quite a lot of computational complexity.
The measurement is really expensive in computer resources. Even with the fastest PC, you are still somewhere in the range of three to tens of seconds, depending on how accurate you want your result to be. That’s okay in design and characterization, but it’s painful on the manufacturing floor, where you need to adjust the device at many different temperatures, power supplies, etc. This is a problem for manufacturing; this measurement is costing them a lot of oscilloscope time, and that’s their most expensive equipment. We did try to speed it up with machine learning, more or less teaching the machine learning algorithm which signals are the good ones and which are the bad ones, based on our standard measurement of it, and we did not expect too much in the beginning. There were several problems, and the machine learning team was actually at Georgia Tech. This was a cooperation between Tektronix and Georgia Tech, with Dr. Steven Ralph and his team.
In the beginning, the whole process was slightly slower, and we were not sure this was going to work; we dipped our toe in the water. They did find several machine learning (ML) algorithms which worked out, and in particular, the waveform-processing one is the one I liked. We pursued it some more, and you get to the point where it just measures TDECQ very well. We had a very interesting case: there was a bug in our own TDECQ algorithm, and a waveform that generated an erroneous result in our own algorithm, and the well-trained machine learning network caught that. It did not repeat that error, because it was outside the correlation of all the other training examples. The machine learning showed that it’s really solid, that one anomaly in the training process doesn’t break the system, and we were able to fix the software because of the flag we got on that one case.
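The idea Zivny describes, training a model to reproduce an expensive reference measurement directly from waveform data, can be sketched in miniature. Everything below is illustrative: the waveforms are synthetic PAM4 samples, the "slow" metric is a hypothetical stand-in for the real TDECQ computation (not the actual algorithm), and the surrogate is a simple least-squares fit on one cheap feature rather than the network used in the Tektronix/Georgia Tech work.

```python
import random

random.seed(0)

LEVELS = [0.0, 1 / 3, 2 / 3, 1.0]  # idealized PAM4 power levels

def make_waveform(noise_sigma, n=400):
    """Synthetic PAM4 samples with Gaussian noise (a stand-in for captured data)."""
    return [random.choice(LEVELS) + random.gauss(0.0, noise_sigma) for _ in range(n)]

def slow_metric(wave):
    """Stand-in for the expensive reference measurement: for every sample,
    distance to the nearest ideal level, averaged over the full record."""
    return sum(min(abs(s - lv) for lv in LEVELS) for s in wave) / len(wave)

def cheap_feature(wave):
    """Cheap proxy feature computed on a sparse subsample -- far less work
    than the full reference computation."""
    sub = wave[::20]
    return sum(min(abs(s - lv) for lv in LEVELS) for s in sub) / len(sub)

# "Training": label many waveforms with the slow reference, then fit
# y = a*x + b by ordinary least squares on the cheap feature.
train = [make_waveform(sigma) for sigma in [0.01, 0.02, 0.03, 0.04, 0.05] * 20]
xs = [cheap_feature(w) for w in train]
ys = [slow_metric(w) for w in train]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Fast surrogate prediction on a fresh waveform vs. the slow reference
test_wave = make_waveform(0.035)
pred = a * cheap_feature(test_wave) + b
true = slow_metric(test_wave)
print(pred, true)
```

The design choice mirrors the motivation in the interview: the labels are expensive to produce once, during training, but the trained surrogate is cheap to evaluate on the manufacturing floor. A real deployment would use raw waveform samples and a learned network instead of this hand-picked feature and linear fit.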
It was a good experience working with academia like that, which it pretty much always is for us; it was really productive for both sides. Industry usually has less time to do high-quality research, and it doesn’t have the time to do really broad research; and vice versa, academia is straining to find what is really relevant to industry. In that way, this cooperation connected the two strengths very nicely. I was glad about that. So hopefully we’ll see this making it into the future; that’s still to be determined, but the results are good. We can show that this is not of academic interest only, not the dreaded “don’t do this at home because it will not work if you cross your legs wrong” kind of problem (laughs). It’s pretty solid.
Johnson: Right, and that bodes well for being able to continue to move more machine learning into the manufacturing environment.
Zivny: I agree with that. I think that the measurements are sometimes expensive enough that you are looking for any improvements you can get. There are other opportunities, obviously, and one wants to be careful. There are some things you can do today, and there are some things you can think about doing. Maybe you can pull them off, but not everything you can do is always a welcome solution to the users. What seems to be a brilliant idea in a planning session sometimes doesn’t quite work out.
We did try, for example, to identify what problem there might be in the signal. Is it your DUT’s bandwidth? Is it your noise level? And we sort of widened the machine learning toward that. It seems to me at this point that’s maybe less important to the industry. It’s something that everybody wants to talk about a little bit, but when you ask them, “So how many of those would you buy?” they start looking at the ceiling and you realize, “Okay, so this is an issue, but it’s either too broad, too early, or maybe just plain not important enough.” So, it’s interesting but maybe not critical.
Johnson: Still finding its legs?
Zivny: Exactly. One wants to find those where it heals the hurt and focus on that. And just like with many technologies, that’s going to be an ongoing process.
Johnson: Great. Well, thanks for taking the time to talk with us, Pavel.
Zivny: Thanks for having me and enjoy the rest of the show.
Johnson: Thank you.