Eli,

> Government stuff? Don't you have to document the algorithm that you're
> using and use an approved algorithm?


We do have some government accounts. Whether we have to disclose depends
on the department or agency that is using the technology. However, I can
say that in one case where we had to disclose, we customized the system
for them. That allowed us to disclose the technology they would actually
be using, as required, while keeping the original protected so we did not
have to disclose our proprietary stuff. There are also times when we are
asked to take out our stuff and use their approved stuff. Again, these
become 'custom' works that do not go out to the public. And yes, in some
cases you do end up having to get approval before they will use a product
or technology. Twice we have had to try again because what we had did not
meet the required standards. And in one case the changes they wanted,
including certain disclosure requirements the client had, forced us to
decline having them as a customer.

> So shouldn't the code be signed, not encrypted, since you should still be
> able to see the code, just not change it?


That is another capability we have, but it is not yet implemented. We have
not decided whether to allow this functionality, because the purpose behind
the encryption was not only to protect but to hide. If the source were only
locked but still visible, it would be a simple matter of doing a screen
print or writing down what you see, and then you have a copy of, or access
to, source you are not supposed to see.

> I assume you are talking about your stage 2 implementation, right? Unless
> I'm wrong about VB there is a problem with your method. In order to go from
> source code into compiled code you need to first parse the code, create IL
> tokens and then create an executable file based on that. Your program would
> need to decrypt the code into plain text in order for VB to compile it. So
> at some point the plain text has to be in the system.


Not sure how to answer this, because describing how this works would give
away what we are doing. I will have to think about this one before I
answer. What we are doing is not easy and has taken a lot of work to
accomplish, and as I mentioned, it is not complete yet. We do have a
working prototype now, but I am not sure how to describe the process to
you... let me think on this a bit.
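I will grant the general shape of your argument, though. As a generic
sketch (again, not our pipeline): even an engine that decrypts only in
memory, hands the text to the compiler, and wipes the buffer afterwards
still has the plaintext exist for a moment:

# Generic sketch of Eli's point, not our pipeline: even if you decrypt only
# in memory and wipe afterwards, the plaintext exists while the compiler runs.
def compile_protected(ciphertext, decrypt, compile_source):
    plaintext = bytearray(decrypt(ciphertext))  # plaintext now exists in RAM
    try:
        # anything that can read this process's memory can read it here
        return compile_source(bytes(plaintext))
    finally:
        for i in range(len(plaintext)):         # best-effort wipe afterwards
            plaintext[i] = 0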

> Also this means that the key to decrypt the code is also stored in the
> system since it needs to be able to decrypt the code in order to compile
> it.

The current 'stage 1' implementation does, in a manner of speaking, store
the key with the project. Stage one, the current release, has only a very
basic capability in regard to what we are discussing, partly because we
have not yet completed all the legal necessities. Although this capability
in the product that is out now works very well, it is only a very small
part of the entire technology system we have been developing. The next
variation does not store the key. Again, there is no way to describe this
in detail because it would definitely give away what we have invented. I
will say this: if you took a project that had no key yet, then took an
identical copy of that same project that DID have a key, and did a
bit-by-bit comparison, you would find they are identical in every
aspect.... 100%. The trick is how we did this, and THAT is the heart of
the technology. I will also state that the key is NOT stored anywhere else
either. In fact, because of this, we are also working on a new software
protection system that uses parts of this technology... and if successful,
we will be releasing the software protection engine to the developer
community.... and we are contemplating releasing it for free.
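I will not say how we do it, but so nobody thinks the idea itself is magic,
there are well-known ways for a key to be 'stored nowhere'. The textbook
one - an illustration only, not our method - is to derive the key on demand
from something the user supplies, so nothing ever has to be written into
the project file:

# Textbook illustration, not our method: derive the key at run time from a
# passphrase, so no key is ever written into the project file - a keyed and
# an unkeyed copy of the file can be byte-for-byte identical.
import hashlib

def derive_key(passphrase: str, project_name: str) -> bytes:
    # PBKDF2 with public data as the salt; only the passphrase holder can
    # reproduce the key, and nothing key-specific is stored anywhere.
    return hashlib.pbkdf2_hmac("sha256",
                               passphrase.encode(),
                               project_name.encode(),  # public salt
                               100_000)

key = derive_key("my pass phrase", "MyProject.vbp")    # made-up names
print(key.hex())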

> Also this means that the key to decrypt the code is also stored in the
> system since it needs to be able to decrypt the code in order to compile
> it.
>
> So with the key and plaintext stored in memory it seems like you are
> depending on security through obscurity which is not secure.


First, I never said this is what we are doing. You are making an
assumption. Second, do you really think we would not have thought of the
fact that anyone can read memory, files, or any other part of your system -
even the BIOS? Sorry, but you're cold... not even warm. ;-)

> So are you creating your own interpreter to interpret the source code at
> runtime? Or are you decrypting the source code, compiling it, and then
> re-encrypting the executable file?


Let me clarify... in the existing and released engine out now in AgMapthat
PB+, we are not doing anything at all except encrypting and decrypting
(and location tracking) the source code. You have full control over
encryption and decryption. If you want the code to execute or compile,
you have to decrypt it in the current engine. But we are working on the
next stage, where you will not need to do this.

Now, in development we have a much more powerful system. Right now,
because this is in very early prototype stages and because we are testing
the integrity of the data, we do decrypt before compiling and check each
bit. This, however, is a built-in debugging option; in full mode it is not
necessary to decrypt the source before you send it to be compiled. I am
not at liberty to describe what happens when you send it to be compiled,
but I will tell you that we have two methods we are looking at. The first
is an intermediate interpreter. The second is not, and uses the actual
data, but in such a fashion that it would make no sense to anyone who tried
to look at it as it entered the compile stages. This second part is giving
us some problems right now when we are passing certain data, but we have
confidence we will resolve it.
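The debugging option I mentioned is nothing exotic in principle. A generic
round-trip check looks like this, with a toy XOR stream standing in for any
real cipher:

# Generic round-trip integrity check, with a toy XOR stream standing in
# for a real cipher: encrypt, decrypt, and confirm every bit survives.
import hashlib
import os

def xor_stream(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

source = b"Public Sub Main()\n    ' ...\nEnd Sub\n"
key = os.urandom(32)

round_trip = xor_stream(xor_stream(source, key), key)
assert hashlib.sha256(round_trip).digest() == hashlib.sha256(source).digest()
print("round trip is bit-for-bit identical")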

> What type of file does it compress? Normal text files are different from
> binary executables, and source code is probably more compressible than
> normal text.


Oh, sorry, I did not realize I left the file type out of my post. That 60
meg file is a .bmp picture file created from a machine that takes
CAT-scan-type images of the human eye. I do not remember what that machine
is called, but you should see the images... wow! BTW, for reference, our
first run beat zip by 55% - and in fact it was done completely by accident!
And we have proven it is 100% lossless, and this has been confirmed by an
outside third party. We did not even realize what we had done until the
client who had contracted us for some special work called us about 5
minutes after we had sent them a prototype of a software program we were
writing for them. They told us to take a look at our code and the
numbers... Let me tell you, we were in shock... all by accident. They had
us immediately stop the project, take care of some paperwork ;-) and then
do some specific testing with them as the outside... umm... control. After
we played around for about two weeks, we all realized what we had done and,
well, we have funding through next year to continue work on this
technology, with no strings at all.
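For anyone wondering where a figure like 'beat zip by 55%' comes from, it
is just a size comparison against a DEFLATE baseline on the same file. A
minimal benchmark sketch (zlib standing in for zip, and the file name is
made up):

# Minimal benchmark sketch: compare a candidate compressor's output size
# against DEFLATE (what zip uses) on the same file. File name is made up.
import zlib

def my_engine_compress(data: bytes) -> bytes:
    # Placeholder - substitute the compressor you are actually testing.
    return zlib.compress(data, 9)

with open("eye_scan.bmp", "rb") as f:     # hypothetical 60 meg test image
    data = f.read()

baseline = len(zlib.compress(data, 9))
candidate = len(my_engine_compress(data))
print(f"{100.0 * (1 - candidate / baseline):.1f}% smaller than DEFLATE")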

The compression engine works with all file types and converts them to our
own file type, extension and structure, which we have called .agf (Agendum
File format). Currently we have tested most common file types like .bmp,
.jpg, .txt, .doc, .sys, .dll, .exe and so on. We are currently
concentrating on the .jpg, .bmp, .mpeg, .mp3, .divx, .mpg, .tiff and .wav
extensions. I should also mention that the 60 meg image processes in
about 3 seconds. We are working on video as well, and we do have a working
'streamer' that is pretty awesome. We can reduce a 20 meg video for
streaming, although we are currently having trouble because the hardware
is not fast enough. We are talking to a 3D video card company about
developing this part of the technology around a hardware technology. We
have a prototype coming in two weeks, and this puppy runs about 200% faster
than any 3D video card currently on the market. We can't wait to see if we
can use that hardware with our code... if so... oh man.

One problem we are working on now is how to handle already compressed
files. For example, you can take a .jpg or .jpeg file and save it out of
20 different paint packages. You will get 20 different file sizes AND
quality outputs, all usually within a few bytes of each other. What is
weird, however, is that depending on the program you saved that image from,
we seem to achieve differing results, and we have not yet resolved this
with our math.

For example, if I take a 650kb .jpg file saved out of two different paint
packages and run it through our engine, one file will come out at about
15.5kb and the other will come out at about 48.6kb. We are turning grey on
this one... Both are within the acceptable limits, but the discrepancy is
so big that it is not acceptable to us. We want that margin down to 10kb
or less, max. During development, we have also found that we can strip
data from an image, restore the image to its original size, AND increase
its resolution. At 200% we are at a loss of about 2.3%, at 300% about 18%,
at 400% about 40%, and above that we go over 50% loss. The data is all
there, but increasing the resolution loses the integrity of the data's
quality the higher we go. We are not really concentrating on this right
now, but our goal is to reach a 500% resolution increase at less than 25%
loss.
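So nobody has to guess what those percentages mean, here is one generic way
to score loss - average per-byte drift between the original and the
reconstruction. This is just the shape of the measurement, not necessarily
how we score it:

# Generic per-byte drift metric for loss percentages like those above -
# compare a reconstructed image against the original, byte by byte.
def percent_loss(original: bytes, reconstructed: bytes) -> float:
    assert len(original) == len(reconstructed), "sizes must match"
    drift = sum(abs(a - b) for a, b in zip(original, reconstructed))
    return 100.0 * drift / (255 * len(original))

print(percent_loss(b"\x10\x20\x30", b"\x10\x22\x30"))  # small, nonzero loss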


Anyhow, that is the basics. I do want to stress... this is in development.
NONE of it is available, nor will it be for a while. Who knows, we may not
be able to resolve some of the issues we are dealing with, and this may all
fall flat on its face. But we are giving it a try and so far, things are
looking pretty good. To be perfectly honest, this is the first time we
have really spoken in this much detail publicly, primarily because it is
still in development and is not ready for release. If we succeed, cool!
If we do not, well, at least we gave it a try. But I can tell you this:
there is a whole lot of interest out there. We have shown this technology
to some select companies and people, and based on the reaction, we will
have whatever support we need to take this as far as we possibly can.
****, I may even be posting a message in OffRamp this coming new year for
people that may want to have a shot at working on this stuff. There are
five of us working on this engine right now, but if all goes well, very
shortly we will be looking for a few others that "work well with
others". ;-)

Everyone knows we do VB ActiveX and development tools. But VB is our
'fun'. There is much more going on here than VB development. To give you
an idea of some of the 'fun' we have had: have you ever heard of the
BattleTech Center that was built in downtown Chicago in the early 90's?
Well, that 16-cockpit BattleTech simulator was one of the very first
technologies to use 3D graphics. When Doom came out, it was pretty much
the first on PC-type computers... or was it? Actually it was about 3 years
late, because FASA Corp had opened the BattleTech Center, and inside each
single-person cockpit, the heart of the system was actually an Amiga
computer - and the Amiga had a special piece of hardware it needed to make
it do its magic. This piece of hardware was known very well in the days of
the Amiga. It was called the Sapphire 020 Accelerator... and two of my
partners and I are the guys who designed that board. We were one of the
companies (we were not called Agendum back then) on the team that worked on
that first 3D video simulator system called BattleTech.

Anyhow, we have done quite a bit..... and as long as we keep having fun,
there will be a lot more to come.....

sorry for such a long post.. once I got going...well..... ;-)
--
Take it light!!

Todd B

"Eli Allen" <eallen@bcpl.net> wrote in message
news:3a4bc03b@news.devx.com...
> Replies below (cross posted for the same reasons as my other post)
>
> "Todd B" <ToddB@NOSPAMAgendumSoftware.com> wrote in message
> news:<3a4b1d5a$1@news.devx.com>...
> > One of the biggest advantages at this time is the ability to protect
> > source