This page contains a variety of my thoughts that might be characterized as
theories and/or dreams. They are in no particular order. Here is a link back to my professional
page, which serves as the main entry point.
I think I can begin to see
the future of the information economy in facilities like
the Web. In 1995 I published a paper related to the Web and MBone -
Distribution via Hopwise Reliable Multicast. This paper is somewhat
narrowly technical, but it illustrates some of the potential that I
see in these areas.
This economy is starting to get charged up now with the advertising
capabilities of the Web. I see the Web developing commercially
in several stages:
- Advertising - already well launched, see, for example, my
Computer and Communication Pages,
now successfully producing advertising revenue since 1995.
- Financial transactions - this technology is rapidly becoming
available from projects like
- Direct information sales - glimpses of what this will be like can be
viewed in the variety of online magazines, newspapers, journals, etc. already
beginning to show up on the Web.
Tremendous progress has been made in the above areas during the late 1990s
and 2000s. Even the dotcom bust really didn't slow down the progress in
this area. It may culminate when we have immersive communication (video+)
between essentially any two points on Earth.
I designed a switching mechanism that can deal with the torrent of data at the core of
today's and tomorrow's Internet by an innovative application of existing crossbar switching
technology. This mechanism, which I call "just in time switching", schedules
the systematic throwing of the crossbar switches in advance of arriving "frames"
of data. It is a little like railroad switching. By scheduling the switching
in advance, it avoids the tight, and therefore difficult, timing constraints
of examining cell or frame headers before making a switching decision. It is not able to
deal with real time traffic change requirements, but is much better able to
handle "virtual circuit conditioning" of the sort that ATM technology allows
in the face of the very large sorts of data flows that are possible with
dense wavelength-division multiplexing.
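The scheduling idea above can be sketched in a few lines. This is a toy model, not the actual design: switch settings for each future "frame slot" are computed and installed ahead of time, so at switching time the crossbar simply applies a precomputed permutation and never inspects a header. All names here are illustrative.

```python
# Toy model of "just in time switching": crossbar configurations are
# scheduled in advance, per frame slot, a little like throwing railroad
# switches before the train arrives.

class Crossbar:
    def __init__(self, ports):
        self.ports = ports
        self.schedule = {}          # slot -> {input_port: output_port}

    def schedule_slot(self, slot, mapping):
        # Install the switch positions for a future frame slot, in
        # advance of the data arriving.
        assert len(set(mapping.values())) == len(mapping), "output conflict"
        self.schedule[slot] = dict(mapping)

    def switch(self, slot, frames):
        # frames: {input_port: payload}.  The crossbar just applies the
        # precomputed permutation; it never looks inside a frame.
        mapping = self.schedule.pop(slot)
        return {mapping[p]: payload for p, payload in frames.items()}

xbar = Crossbar(ports=4)
xbar.schedule_slot(0, {0: 2, 1: 3})
xbar.schedule_slot(1, {0: 1})
out = xbar.switch(0, {0: "frame-A", 1: "frame-B"})
# out == {2: "frame-A", 3: "frame-B"}
```

Note that all the hard decisions (conflict checking, routing) happen at schedule time, which is the point: the per-frame data path has nothing left to decide.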
I have developed a design for a "Data Flow"
machine. I call it a "functile" computer and have given it the name "IMPACT" - Integrated Machine for Processing
Asynchronous Cellular Transactions. It is based on programmable cells that interact in a data flow way,
that is they wait for their needed input and for needed output buffers before firing. I believe this
structure provides a much more natural architecture for parallel programs. The parallelism in such a
machine automatically includes both pipelining and traditional parallelism. I have simulated this "computer"
with an example set of
"personalities" (opcodes), and have run a variety of simple programs
on the simulation. My simulations have demonstrated
that a reasonably selected version of this architecture (2D, size within constraints of
cost and dynamic failure, etc.) will
run most usefully parallel algorithms on the order of a million times
faster than they can be run on the hottest sequential processor
of the day. This technology is waiting for a slowdown in the
"killer" micro onslaught. Anyone interested in this area is encouraged
to contact me. This technology is very straightforward, but conceptually
not obvious. I've put a PDF file on this Web site that describes the
main points of this technology. Here is that
hard copy document from 1985.
This technology benefits from the recent moves to
asynchronous logic (mostly to conserve power), as it needs asynchronous logic
to work effectively. In the document you can see how the cells exchange
data (each waits in parallel for each direction until it has data to
deliver and the buffer in that direction is empty). You can also
see a loading mechanism, a link loading algorithm, a mechanism for
bypassing failed cells (fault tolerance), and some sample simulation output.
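The firing rule described above can be sketched as follows. This is a minimal cartoon of the data-flow idea, not the IMPACT design itself: a cell fires only when every needed input holds data and every needed output buffer is empty. The names ("Cell", "step") are illustrative.

```python
# Minimal data-flow firing-rule sketch: each cell waits for its inputs
# to be full AND its output buffers to be empty before firing.

class Cell:
    def __init__(self, name, op, inputs, outputs):
        self.name, self.op = name, op
        self.inputs, self.outputs = inputs, outputs   # buffer names

    def ready(self, buffers):
        return (all(buffers[i] is not None for i in self.inputs) and
                all(buffers[o] is None for o in self.outputs))

    def fire(self, buffers):
        args = [buffers[i] for i in self.inputs]
        for i in self.inputs:
            buffers[i] = None                 # consume inputs
        result = self.op(*args)
        for o in self.outputs:
            buffers[o] = result               # deliver output

def step(cells, buffers):
    # One sweep: every cell that is ready fires "in parallel".
    for c in [c for c in cells if c.ready(buffers)]:
        c.fire(buffers)

buffers = {"a": 3, "b": 4, "s": None, "d": None}
cells = [Cell("add", lambda x, y: x + y, ["a", "b"], ["s"]),
         Cell("dbl", lambda x: 2 * x, ["s"], ["d"])]
step(cells, buffers)   # "add" fires; "dbl" must wait for its input
step(cells, buffers)   # now "dbl" fires
# buffers["d"] == 14
```

Pipelining falls out for free: once "add" has delivered into "s" and "s" is drained, "add" could fire again while "dbl" works on the previous value.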
I believe, though this is very speculative,
that it is not all that difficult to
construct an artificial brain. I have a model for one that
I believe can eventually think better than human brains. This model
is very unlike computers or neural networks. It is a little like the
perceptron models of the 1960s, but it uses quantum mechanical
uncertainty in a critical way. I have also done some simple simulation
studies of this model where I demonstrated operant and classical
conditioning. However, this work (unlike the
functile computer work above) was done long enough ago that
the software is now useless and would have to be rewritten to
get it going again. In my opinion, many of the recent developments in the
theory of consciousness reinforce the concepts behind this model
of the brain. The basic idea of this model is that quantum mechanical
randomness (true randomness) at a microscopic level in the brain
is manifest at a macroscopic level as the driving force behind
"discovery." That randomness generates the possibilities that the
brain selects from according to its various pleasures and pains.
I believe the difficulty people have in imagining how brains
along these lines could work has the same cause as the difficulty people
have imagining how biological evolution works. The process involves
much larger numbers (in this case numerous neurons selecting
from random processes) than people are used to dealing with.
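The generate-and-select idea above can be illustrated with a toy operant-conditioning loop. This is a cartoon of the selection principle, not the author's brain model: randomness proposes actions, and "pleasure" feedback biases which actions get selected in the future. All names and numbers are illustrative.

```python
import random

# Toy operant conditioning: random proposal + reward-driven selection.
random.seed(0)
weights = {"press_lever": 1.0, "wander": 1.0, "sleep": 1.0}

def choose(weights):
    # Randomness generates the possibilities...
    actions = list(weights)
    return random.choices(actions, [weights[a] for a in actions])[0]

def reward(action):
    # ...and "pleasure" does the selecting (a food pellet, say).
    return 1.0 if action == "press_lever" else 0.0

for _ in range(500):
    a = choose(weights)
    weights[a] += reward(a)        # reinforce rewarded behavior

# After training, "press_lever" dominates the selection weights.
```

No individual choice is deterministic; the statistical bias over many random trials is what produces the learned behavior, which is why small-number intuitions mislead here.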
I also believe (again fairly speculatively) that
animals (e.g. humans) can be "reengineered" so that they will live
essentially forever. This belief is founded mostly on the
simple observations that:
- Mitotic organisms (e.g. bacteria) can reproduce
forever (i.e. they don't "age"),
- Mitosis "resets" the aging "clock", and
- Meiotic animals have varying lifespans.
If aging were some sort
of free radical deterioration of genetic or other cell structures, it
would be basically the same aging process regardless of species. This
is not so. Why do dogs age and die sooner than humans, who in turn
die sooner than some tortoises? It is because a clock in the dog's body
that had said "switch from infant to juvenile hemoglobin" or
"new teeth" or "become sexually active" or ... ran out of
things to do and basically said "time to die." This dying mechanism
is very helpful from an evolutionary viewpoint, but for humans who are
largely evolving socially, it is counterproductive. We needn't put up
with it any more. While there are all sorts of ways people try to ease
the pain of aging and death, very few people really want to get old and
die. We needn't continue doing it. A concerted research effort will
enable us to loop or essentially stop that aging clock. We may well
age and die by some other means, but it will be a much longer and different process.
This sort of thinking has recently begun to be referred to as
Negligible Senescence and popularized by people like
Aubrey de Grey. This may seem like just so much "fountain of youth"
hoopla, but I believe it is not. It is not likely to allow people to live
"forever", since (even besides accidents and such) there are still aspects
of people that do indeed "wear" out (the eyes' lenses, some joints, etc.).
While these could also be "regrown" by resetting the clock far enough,
that might well reset the brain as well - creating essentially a new person
(e.g. a clone). Very much as children remember little of their very
young years (because their brains are changing and growing so), a person
reset to such an age would become essentially a new person. Still,
many (including me) believe that human life can be greatly extended and
made much healthier by holding the "clock" in check.
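The "aging clock" argument above can be made concrete with a toy sketch. The event names and the "loop" intervention are purely illustrative, not a biological claim: aging is modeled as a species-specific schedule of developmental events that eventually runs out, and the proposed intervention is to loop part of the schedule instead of letting it terminate.

```python
# Toy model of aging as a developmental event schedule, not generic
# wear: each species runs its own clock, and death is just the last
# scheduled event.  Illustrative only.

dog_clock = ["fetal hemoglobin -> adult", "new teeth",
             "sexual maturity", "time to die"]

def run_clock(events, loop_from=None):
    """Step through developmental events; optionally loop back into
    the schedule instead of reaching 'time to die'."""
    i, history = 0, []
    while len(history) < 8:            # observe a few steps
        event = events[i]
        if event == "time to die" and loop_from is not None:
            i = loop_from              # hold the clock in check
            continue
        history.append(event)
        if event == "time to die":
            break
        i += 1
    return history

mortal = run_clock(dog_clock)               # ends with "time to die"
looped = run_clock(dog_clock, loop_from=2)  # cycles; never reaches it
```

The point of the sketch is only the shape of the argument: if death is a scheduled event rather than accumulated damage, then intervening on the schedule, rather than on the damage, is the natural target.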
I believe Mathematics should be put on
a sound logical basis by having computers maintain a database
of proven theorems and verify new theorems (note, I don't suggest
that computers prove the theorems - that is for the brains). Such
a system would eliminate situations like the current ambiguity over the
proof of Fermat's "last theorem." It would also provide more confidence
for very complex proofs (e.g. like that for the four color problem) that
can only be verified by very few people and/or with the aid of computers.
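A minimal sketch of such a system follows. Everything here is illustrative (the proof format is bare modus ponens over implication strings), but it shows the division of labor the text proposes: humans supply the proof, the computer only checks it, and a theorem enters the database, becoming usable as a lemma, only after its proof verifies.

```python
# Toy machine-verified theorem database: submissions are checked step
# by step; only verified theorems are stored and reusable as lemmas.

class TheoremDB:
    def __init__(self, axioms):
        self.proven = set(axioms)

    def submit(self, theorem, proof):
        """proof: list of statements, each either an axiom/known
        theorem or derived by modus ponens from two earlier lines."""
        lines = []
        for stmt in proof:
            justified = stmt in self.proven or any(
                imp == "%s -> %s" % (p, stmt)
                for p in lines for imp in lines)
            if not justified:
                raise ValueError("unjustified step: " + stmt)
            lines.append(stmt)
        if proof and proof[-1] == theorem:
            self.proven.add(theorem)   # now available as a lemma
            return True
        raise ValueError("proof does not end with the theorem")

db = TheoremDB(axioms={"A", "A -> B", "B -> C"})
db.submit("B", ["A", "A -> B", "B"])        # checked, then stored
db.submit("C", ["B", "B -> C", "C"])        # reuses B as a lemma
```

Checking a proof is mechanical and cheap even when finding it was hard, which is exactly what makes the very long computer-assisted proofs auditable.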
I've been embarrassed for many years by the pathetic "security/integrity"
properties of commercial computer systems. I believe these problems result
largely from the lack of Principle Of Least Privilege (recently called
Principle Of Least Authority) protection between domains. Many people
(notably Butler Lampson) believe that such protection is inherently
difficult to achieve as somebody must painstakingly think out and
configure access control for all actors in a system. This view is
false as simply looking at object oriented programming shows. The
local decisions are very easy to make by programmers as they know
exactly what they are trying to accomplish and what parameters are
needed. By using this "capability" model of computing I believe
we can achieve highly secure and high integrity computing systems.
One aspect of "capability" computing that many people fear is the
apparent "loss of control" with such systems in that once a reference
to an object has been delegated to one subject, that subject can
delegate it to another. This free delegation, while so important
to POLA computing, strikes many people as too loose, as it goes beyond
what is often done with people - where we may want to limit access
or even perhaps change access in the future.
I recently co-authored a paper that addresses this topic.
I believe this approach can provide the POLA value of
capabilities while at the same time providing the logging,
auditing, and administrative management available in Access
Control List based systems. I hope to soon see such a
mechanism available on the Web (e.g. WebKeys) where it is
vital to solve problems like the mash-up problem and generally
deal with the horrific mess that is access control on the Web today.
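One classic answer to the "loss of control" worry can be sketched in code. Rather than handing out the object itself, the grantor hands out a revocable, logging forwarder (the "caretaker" pattern from the capability literature). Delegation stays free, since the forwarder can be passed on like any capability, but the grantor keeps an audit trail and a kill switch. The names here are illustrative, not from the paper mentioned above.

```python
# Caretaker sketch: a revocable, logging forwarder around an object.
# Delegate the caretaker freely; revoke it later if needed.

class Caretaker:
    def __init__(self, target):
        self._target = target
        self._revoked = False
        self.log = []

    def invoke(self, method, *args):
        self.log.append((method, args))        # auditing hook
        if self._revoked:
            raise PermissionError("capability revoked")
        return getattr(self._target, method)(*args)

    def revoke(self):
        # The grantor's kill switch; affects every downstream delegate.
        self._revoked = True

class File:
    def __init__(self, text): self.text = text
    def read(self): return self.text

secret = File("launch codes")
cap = Caretaker(secret)        # delegate this freely, never `secret`
cap.invoke("read")             # works, and is logged
cap.revoke()
# cap.invoke("read") now raises PermissionError
```

Because revocation cuts the forwarder rather than chasing down every delegate, it composes with free delegation instead of fighting it, which is the property ACL-minded critics usually assume capabilities lack.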
I take care in monitoring how solid my understanding is in
any area. It is important to me to know when my understanding is very loose
(e.g. the brain model or aging theory noted above), very solid
(e.g. the cellular tiled "functile" computer above, which I have simulated),
and strictly logical (e.g. when I have a proof in some Mathematical
domain or when I know something will work on a computer).