kb8tq@n1k.org said:
The astonishing part of this “new world” is that a very complex chip that is
made in high volume is cheaper than a handful of less popular (but far less
complex) chips.
It would be interesting to see the die sizes.
Another advantage of the CPU solution is that you can make a large class of
changes by just tweaking the software. For example changing the input from
10 MHz to 5 MHz or 1 MHz. That's also a disadvantage - somebody has to write
the software.
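As an illustration of how small that "class of changes" can be, here is a hypothetical C sketch (not any poster's actual firmware) in which the input frequency is a single constant, so retargeting the divider from 10 MHz to 5 MHz or 1 MHz input is a one-line edit rather than a board respin:

```c
#include <stdint.h>

/* Hypothetical sketch: a 1 PPS divider where the input frequency is a
 * single constant.  Retargeting from 10 MHz to 5 MHz or 1 MHz input is
 * a one-line edit (or a build-time option), not a hardware change. */
#define INPUT_HZ 10000000UL   /* change to 5000000UL or 1000000UL */

static uint32_t ticks;

/* Called once per input clock edge (e.g. from a timer ISR).
 * Returns 1 on the edge where the 1 PPS output should pulse. */
int on_input_edge(void)
{
    if (++ticks >= INPUT_HZ) {   /* one second's worth of edges */
        ticks = 0;
        return 1;                /* pulse the 1 PPS output pin */
    }
    return 0;
}
```

Everything frequency-specific lives in one constant; that is the whole "tweak the software" advantage in miniature.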
Adding software to a project adds another layer of management problems. If
the software is really simple that's not much of a problem, you write it once
and debug it and then you don't have to fix any bugs. But software easily
gets complicated, which means bugs, and hardware guys are often poor at
software engineering and/or project management when software is involved.
(Software geeks are usually bad at it too.)
--
These are my opinions. I hate spam.
Hi
On Jan 16, 2016, at 3:00 AM, Hal Murray <hmurray@megapathdsl.net> wrote:
kb8tq@n1k.org said:
The astonishing part of this “new world” is that a very complex chip that is
made in high volume is cheaper than a handful of less popular (but far less
complex) chips.
It would be interesting to see the die sizes.
Unlike the world of lithography, the dicing process has not made a lot of progress.
Decades ago a 1 mm x 1 mm die was about as small as you could get. From what
I can see that has not dropped by more than a factor of two in 40 years (if at all).
Yes, there’s a lot more to it than just a dicing saw. Things like bond wire attach
also figure in. It still takes a certain size bond wire to carry a practical amount
of current …
The net result could be a process that does a gate or function in < 1% of the available
area. Everything else is just empty space along for the ride (or to provide attach
points).
Bob
Another advantage of the CPU solution is that you can make a large class of
changes by just tweaking the software. For example changing the input from
10 MHz to 5 MHz or 1 MHz. That's also a disadvantage - somebody has to write
the software.
Adding software to a project adds another layer of management problems. If
the software is really simple that's not much of a problem, you write it once
and debug it and then you don't have to fix any bugs. But software easily
gets complicated which means bugs, and hardware guys are often poor at
software engineering and/or project management when software is involved.
(Software geeks are usually bad at it too.)
--
These are my opinions. I hate spam.
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
In message 20160116080037.13903406057@ip-64-139-1-69.sjc.megapath.net, Hal Murray writes:
kb8tq@n1k.org said:
The astonishing part of this “new world” is that a very complex chip that is
made in high volume is cheaper than a handful of less popular (but far less
complex) chips.
It would be interesting to see the die sizes.
Die size is not really an issue until they become big enough to impact overall yield.
And apropos: I just used an LPC810 to do 5 MHz to 1 Hz for my HP5065A clock. It almost
feels surreal to use a 32-bit ARM CPU, even in a DIP8, for something so mundane...
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
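Poul-Henning's firmware itself is not shown in the thread, but purely as an illustration, one common way a tiny MCU like the LPC810 might split a 5 MHz to 1 Hz ratio is a 16-bit hardware timer prescaler plus a software overflow count (5,000,000 = 50,000 x 100). This hypothetical C model simulates both stages:

```c
#include <stdint.h>

/* Hypothetical sketch of a 5 MHz -> 1 Hz division split the way a tiny
 * MCU might do it: a 16-bit hardware timer prescales the input, and
 * software counts timer overflows.  5,000,000 = 50,000 * 100. */
#define PRESCALE   50000u   /* hardware timer reload (fits in 16 bits) */
#define OVERFLOWS  100u     /* software count of timer overflows */

static uint16_t hw_count;   /* models the hardware timer */
static uint8_t  sw_count;   /* models the overflow ISR counter */

/* Simulates one 5 MHz input edge; returns 1 on each 1 Hz tick. */
int clock_edge(void)
{
    if (++hw_count >= PRESCALE) {      /* "timer overflow" */
        hw_count = 0;
        if (++sw_count >= OVERFLOWS) { /* overflow ISR body */
            sw_count = 0;
            return 1;                  /* drive the 1 PPS pin */
        }
    }
    return 0;
}
```

The CPU only wakes up once per 10 ms overflow, which is why a few dozen bytes of code suffice for this job.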
On 16.01.2016 at 15:03, Bob Camp wrote:
Unlike the world of lithography, the dicing process has not made a lot of progress.
Decades ago a 1mm x 1 mm die was about as small as you could get. From what
I can see that has not dropped by more than a factor of two in 40 years (if at all).
Yes, there’s a lot more to it than just a dicing saw. Things like bond wire attach
also figure in. It still takes a certain size bond wire to carry a practical amount
of current …
The net result could be a process that does a gate or function in < 1% of the available
area. Everything else is just empty space along for the ride (or to provide attach
points).
Already 30 years ago, when I took my chip design lessons, there
were chips that were pad-bound. If they had, say, 14 pins, there
was no point in compressing everything together, since the pad
locations and the bonding dictated the minimum chip size.
regards, Gerhard
On 1/16/16 10:07 AM, Poul-Henning Kamp wrote:
In message 20160116080037.13903406057@ip-64-139-1-69.sjc.megapath.net, Hal Murray writes:
kb8tq@n1k.org said:
The astonishing part of this “new world” is that a very complex chip that is
made in high volume is cheaper than a handful of less popular (but far less
complex) chips.
It would be interesting to see the die sizes.
Die size is not really an issue until they become big enough to impact overall yield.
And apropos: I just used an LPC810 to do 5 MHz to 1 Hz for my HP5065A clock. It almost
feels surreal to use a 32-bit ARM CPU, even in a DIP8, for something so mundane...
And how many thousand lines of code (including libraries, etc. that may
have been pulled in)?
I had just this discussion yesterday at work with someone. These days,
silicon (even going into space) is much cheaper than people. Sure, you
could optimize a hand crafted little routine in assembler. Or, you could
just load up RTEMS, compile your program, link in newlib, etc., and have
it working in a day. If you've got 2 MByte of memory, nobody cares
whether you use 1kbyte or 50kbyte.
Hi,
We are drifting from the original problem (dividing 10 MHz to 1 PPS) to
general questions such as hardware vs software implementation,
obsolescence of parts, program data retention and big program sizes for
trivial tasks, all of them also interesting.
Well, returning to the main problem, I just checked the original PPSDIV
program source from TVB to remind myself of the size of the code:
360 lines of assembler source including everything, even blank lines.
That breaks down into a 62-line text header, with a detailed explanation of
how the program works that even includes the schematic drawn in
character graphics, and 302 lines of code including comments. Pruning
the comments and blank lines from this code leaves 182 lines of executable
code.
This is the original PPSDIV code, made for a PIC with more I/O pins, which
divides the 10 MHz input into 9 simultaneous outputs from 100 kHz to 0.001
Hz, all of them synchronous.
The smaller version made for 8-pin chips has about 97 lines of
assembler code including everything. That breaks down into a 46-line text
header and 51 lines of code including comments. Pruning the
comments and blank lines leaves 31 lines of executable code (well,
a little more, since I have a subroutine missing).
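Ignacio's counts refer to TVB's PIC assembler, which is not reproduced here. Purely as an illustration of the logic (not the actual PPSDIV code), the synchronous multi-output behavior can be modeled in C by deriving every output from one master tick count, so all nine outputs are aligned by construction:

```c
#include <stdint.h>

#define STAGES 9   /* 100 kHz, 10 kHz, ... down to 0.001 Hz */

/* Hypothetical C model of a PPSDIV-style synchronous divider: one master
 * count per 10 MHz edge, nine outputs pulsed from the same count.
 * Output i runs at 100 kHz / 10^i, i.e. divide ratios 100, 1000, ...
 * (A real PIC uses chained counters, not a modulo per edge; this only
 * models the synchronous logic, not the implementation.) */
static uint64_t ticks;

void divider_edge(int pulse[STAGES])
{
    uint64_t ratio = 100;               /* 10 MHz / 100 = 100 kHz */
    ticks++;
    for (int i = 0; i < STAGES; i++) {
        pulse[i] = (ticks % ratio == 0);
        ratio *= 10;                    /* next decade down */
    }
}
```

Because every output is a function of the same tick counter, a slow output can only pulse on an edge where all faster outputs pulse too, which is the synchronicity property the original code is described as having.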
For personal use you can stock a couple of spare PICs if you are worried
about their availability in case of a future failure.
Regards,
Ignacio
On 17/01/2016 a las 1:40, jimlux wrote:
On 1/16/16 10:07 AM, Poul-Henning Kamp wrote:
In message 20160116080037.13903406057@ip-64-139-1-69.sjc.megapath.net, Hal Murray writes:
kb8tq@n1k.org said:
The astonishing part of this “new world” is that a very complex chip that is made in high volume is cheaper than a handful of less popular (but far less complex) chips.
It would be interesting to see the die sizes.
Die size is not really an issue until they become big enough to impact overall yield.
And apropos: I just used an LPC810 to do 5 MHz to 1 Hz for my HP5065A clock. It almost feels surreal to use a 32-bit ARM CPU, even in a DIP8, for something so mundane...
And how many thousand lines of code (including libraries, etc. that may have been pulled in)?
I had just this discussion yesterday at work with someone. These days, silicon (even going into space) is much cheaper than people. Sure, you could optimize a hand crafted little routine in assembler. Or, you could just load up RTEMS, compile your program, link in newlib, etc., and have it working in a day. If you've got 2 MByte of memory, nobody cares whether you use 1kbyte or 50kbyte.
Hi
On Jan 16, 2016, at 7:40 PM, jimlux <jimlux@earthlink.net> wrote:
On 1/16/16 10:07 AM, Poul-Henning Kamp wrote:
In message 20160116080037.13903406057@ip-64-139-1-69.sjc.megapath.net, Hal Murray writes:
kb8tq@n1k.org said:
The astonishing part of this “new world” is that a very complex chip that is
made in high volume is cheaper than a handful of less popular (but far less
complex) chips.
It would be interesting to see the die sizes.
Die size is not really an issue until they become big enough to impact overall yield.
And apropos: I just used a LPC810, to do 5MHz to 1Hz for my HP5065A clock. It almost
feels surreal to use a 32bit ARM CPU, even in a DIP8, for something so mundane...
And how many thousand lines of code (including libraries, etc. that may have been pulled in)?
I had just this discussion yesterday at work with someone. These days, silicon (even going into space) is much cheaper than people. Sure, you could optimize a hand crafted little routine in assembler. Or, you could just load up RTEMS, compile your program, link in newlib, etc., and have it working in a day. If you've got 2 MByte of memory, nobody cares whether you use 1kbyte or 50kbyte.
These days, that code (thanks very much to a number of people and various market forces) is likely all free. Not just free as in I can “borrow” it from work; free as in fully licensed for use at no cost. Not only is the code in that category, so is the IDE and all the programming and verification code that goes along with it. If you want to check it all out, the silicon guys (just about all of them) will give you a free (as in there is an implied obligation) board for your commercial project. They will sell you the same board for < $20 (not quite free) for your basement one-off project. Would I use those tools to send a gizmo to Pluto? Maybe not without some adult supervision. Are they used every day to do a wide range of things? Yes indeed they are.
Another unmentioned issue (so far) is that my board full of logic can easily have an unnoticed bug in it. The same is true of my code. Either way, two years down the line there is a need to do something about it. In the case of the board full of logic, it’s get-out-the-soldering-iron time (and possibly ship-parts-back-and-forth time). In the case of the code-based gizmo, out goes a patch. No hardware is swapped out. No soldering irons are involved. Yes, a bit of forethought about boot loaders is needed, but that’s been the way it’s been done for at least the last 20 years. It’s also quite handy when all of a sudden (as in I just bought a new piece of gear) I need 1/2 PPS or 22 1/3 Hz or some other strange output. No new PC board to lay out. Nothing to buy. Just spend a half hour shooting some new code.
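Those odd rates (1/2 PPS, 22 1/3 Hz) are exactly where a software divider shines. One common technique for this, not claimed to be what Bob's gear actually runs, is a Bresenham-style fractional divider: keep the output rate as an exact fraction and carry the remainder, so the average rate is exact with no long-term drift:

```c
#include <stdint.h>

/* Hypothetical sketch of a Bresenham-style fractional divider: the
 * output rate is kept as an exact fraction num/den Hz, so rates like
 * 1/2 PPS (num=1, den=2) or 22 1/3 Hz (num=67, den=3) need no new
 * hardware, only new constants.  The remainder is carried in acc
 * rather than discarded, so the average rate is exact. */
#define F_IN 10000000ULL    /* input clock, Hz */

/* Call once per input edge; returns 1 when the output should pulse.
 * acc is the caller-owned phase accumulator (start at 0). */
int fractional_edge(uint64_t *acc, uint64_t num, uint64_t den)
{
    *acc += num;
    if (*acc >= F_IN * den) {
        *acc -= F_IN * den;
        return 1;           /* pulse the output pin */
    }
    return 0;
}
```

Individual output periods jitter by one input clock (100 ns here), but over any whole number of den seconds the pulse count is exact, which is usually what matters for a timing output.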
At least where I have worked, we stopped doing complex stuff with random logic a long time ago. Stuff either moved to custom ASICs, PLDs (CPLDs, FPGAs, …), or to MCUs. That started in the 80s and was pretty much a done deal by the mid-90s. With > 20 years of data, it’s pretty clear that the programmable approach is at least as bug-free and indeed more reliable than the random logic approach.
Bob