
Python obfuscation

Are there any obfuscators, commercial or otherwise, for Python source
code or byte code, and what are their relative advantages or
disadvantages? I ask because some byte code protection tools are
available for Java and .NET, although from what I've read these do not
seem to be comprehensive protection schemes.



http://petantik.blogsome.com - Telling it like it is

Nov 9 '05
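
For context on what the most basic "obfuscation" of Python looks like in
practice, the usual starting point is shipping compiled .pyc files instead
of .py source. A minimal sketch using only the standard library (the
directory and module names here are hypothetical):

    import compileall
    import py_compile

    # Compile every module under src/ to byte-code (.pyc) files.
    compileall.compile_dir("src", force=True)

    # Or compile a single module; the .pyc is written alongside it.
    py_compile.compile("src/secret_module.py")

As several replies below point out, this is a weak barrier: the standard
dis module and third-party decompilers can recover most of the structure,
and often near-original source, from those .pyc files.
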
>The legality of copying, modifying and redistributing works should be
reformed until it matches a 6th grader's intuitions about sharing.


A 6th grader also has intuitions regarding the ownership of an idea.
"It was MY idea!!!" "No, it's NOT!!!" "Is TOO!!!"

The Eternal Squire

Nov 22 '05 #101
>Utter poppycock. Who is to say that a particular entity holds an
exclusive "sales opportunity" to a particular individual? Are we to
protect the expectations of profit for some, at the expense of sharing
things with each other?
Utter horse manure. Anyone can profit from something so long as it
is their own idea.
Ourselves and our children are lost generations with respect to
ethics, manners,

Ethics such as sharing, and helping one's neighbour?
Giving away an illegal copy of software is not helping one's neighbor,
it is making that neighbor an accessory to copyright infringement,
a federal offense punishable by up to 10 years in prison or a $10K fine.
Such a neighbor should ask: "with friends like that, who needs
enemies?"
I certainly hope our grandchildren will live in an environment that
encourages helping each other, yes.


Helping each other cheat on a test is not helping, it is hurting. There
is no difference, ethically, between plagiarism, cheating, and
unauthorized copying.

Nov 22 '05 #102
On Wed, 16 Nov 2005 13:25:50 -0800, The Eternal Squire wrote:
The teaching of legality and ethics of incorporating
other peoples' works into one's own should begin at 6th grade and be
repeated every year until the message is driven home.
I think you have that completely backwards.

Sixth graders have an intuitive understanding of the economics and
morality of using "things" that adults these days rarely have.

Material things, objects, are scarce resources and cannot be taken with
impunity. If I take your black crayon, then you have one less black crayon.

Non-material things, ideas, are not scarce resources. If I take your idea
of writing programs in a high-level language like Python instead of using
machine code, you still have the idea and we are both better off.
The concept of intellectual property (patent, copyright, trade secret)
is an extension into the business world of issues regarding the proper
usage of ideas (e.g. scientific principles) as treated in high school
and college.
Nonsense. Patents, copyrights and trade secrets are completely and utterly
opposed to proper scientific principles. Alchemists and magicians tried to
monopolise their knowledge. Scientists share. The proliferation of patents
in the medical industry is *hurting*, not helping, medical research:
scientists are reluctant to publish knowledge, or are prohibited by their
employer, and the cost of doing basic research is sky-rocketing due to the
need to pay licence fees.

This is especially obscene when one realises that in the US 80% of the
scientific research that gets patented by private companies is paid for by
tax payer dollars. Your taxes pay for the science which then gets given on
a silver platter to some private company who collects monopoly rents on
that knowledge for 20 years. It is a nice scam if you can get away with
it, and the pharmaceutical companies have got away with it.

Do developers, when writing code, consider how protected their
code will be when choosing what language they will write it in,
i.e. ease of use, speed of language, maintainability and
'obfuscatability'?


Typically not, due to a few well-known principles: 1) Essentially an
optimized (not debug!) compilation from source code to machine language
is nearly as good as encryption for hindering reverse engineering of
the distributed code,


That is utterly wrong. Reverse engineering of even optimized code is
relatively easy. That is one of the big myths that plague the IT industry:
"if I don't release the source code, nobody will be able to work out how
my code works".

It just doesn't work that way. Just ask the people working on the WINE
project, who have a working, almost complete, bug-for-bug compatible
reverse-engineered Windows emulator, and they've done it in their spare
time.

Or ask the virus writers, who often find bugs and buffer overflows and
other security holes in software before the guys with the source code find
them.

Reverse engineering object code is harder than reading source, but it is
still not a barrier to anyone serious about working out how your code
works.
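
(For Python specifically this is trivial to demonstrate with the standard
dis module; the toy function and its embedded constant below are invented
for illustration:

    import dis

    def check_key(key):
        # Toy licence check: the "secret" constant shows up verbatim
        # in the disassembly.
        return key == "SECRET-1234"

    dis.dis(check_key)
    # The output includes LOAD_CONST 'SECRET-1234' and a COMPARE_OP
    # for ==, exposing both the secret and the comparison logic.

Byte code or machine code, the structure is there for anyone who looks.)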

[snip] The greatest theft of sales opportunities
Listen to yourself. "The greatest theft of SALES OPPORTUNITIES". What is
that supposed to mean? Not theft of goods, not even theft of ideas, but
the theft of an opportunity to make a sale?

"I might have been able to sell to that person, but now I can't, it's YOUR
FAULT... I'm going to sue!!!"

The greatest "theft" of sales opportunities is COMPETITION, not copying.
If every food store and restaurant in the country shut down except
McDonalds, then they would have all the sales opportunities anyone would
ever want. Every store that competes with them is "stealing" the
opportunity to make a sale.

We've already seen this way of thinking. Listen to Jamie Kellner, chairman
and CEO of Turner Broadcasting System:

"Any time you skip a commercial you're actually stealing the programming."

Listen to the arrogance: "I guess there's a certain amount of tolerance
for going to the bathroom." We need a permission slip from the television
stations to go to the toilet? Heaven forbid we turn the machine off,
that's theft of sales opportunities.

Perhaps somebody should remind these folks, we're not the customer. We're
the product they are selling: they sell our eyeballs to advertisers, who
give them money for the opportunity to be seen by us. If we choose to
skip the commercials, that's just too bad for Jamie Kellner's business
model.
Ourselves
and our children are lost generations with respect to ethics, manners,
and respect for authority, perhaps we can train our grandchildren to
behave more properly.


There is too much respect for so-called "authority", not too little.
Respect for authority is just another way of saying "Don't think for
yourself, do as you're told."
--
Steven.

Nov 22 '05 #104
On Wed, 16 Nov 2005 14:00:16 -0800, The Eternal Squire wrote:
The legality of copying, modifying and redistributing works should be
reformed until it matches a 6th grader's intuitions about sharing.


A 6th grader also has intuitions regarding the ownership of an idea.
"It was MY idea!!!" "No, it's NOT!!!" "Is TOO!!!"

That's what happens when you try to teach 6th graders about intellectual
property: they revert to a two-year-old mentality.
--
Steven.

Nov 22 '05 #106
The Eternal Squire <et***********@comcast.net> wrote:
The legality of copying, modifying and redistributing works should be
reformed until it matches a 6th grader's intuitions about sharing.


A 6th grader also has intuitions regarding the ownership of an idea.
"It was MY idea!!!" "No, it's NOT!!!" "Is TOO!!!"


And what should we teach those children?

"Now children, it can be an idea you *both* have, and you both get the
benefit. Learn to share."

Or, do we instead teach them:

"Excellent children! Keep on fighting over who owns ideas, and never
share them. That's the sort of society we want you to live in."

The more you try to teach them to stop sharing, the more we'll teach
them to share. Keep your propaganda about "sharing == evil" away from
children.

--
\ "Those are my principles. If you don't like them I have |
`\ others." -- Groucho Marx |
_o__) |
Ben Finney
Nov 22 '05 #108
The Eternal Squire <et***********@comcast.net> wrote:
Ben Finney wrote:
Ethics such as sharing, and helping one's neighbour?


Giving away an illegal copy of software is not helping one's
neighbor, it is making that neighbor an accessory to copyright
infringement, a federal offense punishable by up to 10 years in prison
or a $10K fine.


So the law is the guide to ethical behaviour? Sure you don't have that
reversed?

--
\ "Unix is an operating system, OS/2 is half an operating system, |
`\ Windows is a shell, and DOS is a boot partition virus." -- |
_o__) Peter H. Coffin |
Ben Finney
Nov 22 '05 #110
"The Eternal Squire" <et***********@comcast.net> writes:
A fair request. The teaching of legality and ethics of incorporating
other peoples' works into one's own should begin at 6th grade and be
repeated every year until the message is driven home.


Right. You want to teach potential programmers that they should make a
habit of checking the standard libraries and network library repositories
for libraries that, if incorporated into their work, would make
finishing the product at hand that much easier. You want to teach
them that this is not only legal and ethical, but smart.

I assume that other professions have similar tools/etc. available.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 22 '05 #112
"The Eternal Squire" <et***********@comcast.net> writes:
A fair request. The teaching of legality and ethics of incorporating
other peoples' works into one's own should begin at 6th grade and be
repeated every year until the message is driven home.


Right. You want to teach potential programmers that they should make a
checking the standard libraries and network library repositories for
libraries that can, if incorporated into their work, would make
finishing the product at hand that much easire. You wannt to teadh
them that this is not long legal and ethical, but smart.

I assume that other professions have similar tools/etc. available.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 22 '05 #113
Plenty of case law exists behind judgements made to repair loss of
sales opportunities... these are called infringements upon sales
territories.

Nov 22 '05 #114
Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Wed, 16 Nov 2005 13:51:35 +0000, Ed Jensen wrote:
Steven D'Aprano <st***@removemecyber.com.au> wrote:
I'm not sure if that is meant to be a rhetorical
question or not, but something of the order of 95% of
all software written is never distributed to others,
and so copyright or the lack of copyright is not an issue.


Can you cite your source(s) for this information?


Not easily, but I will try.

If it helps, I will clarify what I was talking about -- in hindsight it is
a little unclear. Most software (I think about 95%) is written by
companies for in-house use only. Since it never gets distributed outside
of the company using it, copyright is of little additional value.


Hmmm, I thought the original "95%" (which I think I remember from
something ESR wrote, but can't pin down what) applied to a wider
category matching your former description: 95% of all software written
is never distributed to others, _either_ because it was never meant to
be, _or_ because the development project failed disastrously (after some
code got written but before it got deployed, i.e., distributed), as so
many projects in the SW industry do (at various stages of the
development process).
Alex
Nov 22 '05 #116
Mike Meyer <mw*@mired.org> wrote:
Alex's solution doesn't require special treatment for disaster
recovery and/or planning, and as such is a valid answer to the


I'm not sure I understand this. I would assume that any software (or,
for that matter, data) of any substantial importance, worthy of being
deployed on a server, does include disaster planning (and recovery
plans, in particular) as a routine part of server-side deployment
(regular backups with copies off-site, etc etc).

Of course, server side deployment DOES make it considerably easier to
enjoy such crucial services, compared with client side deployment; but I
had not addressed such issues at all in my original posts on this thread
(in terms of "IP protection", a "fat client" with data kept client-side
might be just as suitable as a "thin client" for server-side
deployment... but it would be just as vulnerable as a wholly client-side
deployment to issues of [lack of] disaster planning etc).

So, I may perhaps be misunderstanding what you're saying about "my
solution"...?
Alex
Nov 22 '05 #118
Standard libraries are usually paid for by the implementor of the
language, so this is not an issue.

Nov 22 '05 #120
al***@mail.comcast.net (Alex Martelli) writes:
Mike Meyer <mw*@mired.org> wrote:
Alex's solution doesn't require special treatment for disaster
recovery and/or planning, and as such is a valid answer to the

I'm not sure I understand this. I would assume that any software (or,
for that matter, data) of any substantial importance, worthy of being
deployed on a server, does include disaster planning (and recovery
plans, in particular) as a routine part of server-side deployment
(regular backups with copies off-site, etc etc).


To recap, I asked the question "how do you provide software that is
protected but doesn't require special treatment in disaster recovery
and preparedness planning?" I didn't raise the issue of server
deployment in light of this, but had earlier pointed it out as a good
solution to the general issue of copy protection. This resulted in my
beinng asked if I prefered your solution to an alternative that
involved local storage.

Anything on your server doesn't require any special treatment in my
planning. I might want to check what you promise to provide and how
well you live up to those promises as part of evaluating your service,
but that's a different issue. So "Put the software on a server and let
them run it there" is a valid answer to my question.
So, I may perhaps be misunderstanding what you're saying about "my
solution"...?


I hope I clarified what I meant.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 22 '05 #122
The Eternal Squire wrote:
Plenty of case law exists behind judgements made to repair loss of
sales opportunities... these are called infringements upon sales
territories.


Is that supposed to impress me? There are plenty of
lousy laws on the books being enforced.

Oh yeah, that's right, they are the Authorities, and
the Authorities can do no wrong.

--
Steven.

Nov 22 '05 #124
Mike Meyer wrote:
Are you claiming therefore that it's more acceptable to you to have to
access the data remotely every time you use the software than once per
install?


Alex's solution doesn't require special treatment for disaster
recovery and/or planning, and as such is a valid answer to the
question. It may be unacceptable for *other* reasons, but it beats
dictating a disaster recovery plan for your software to the end user
hands down on that basis.


Sorry, I just don't see this as being a significant difference that
makes 'access-always' acceptable and 'access rarely' unacceptable.
No, I am just pointing out that you are mixing up the concept of an
actual 'right' such as one embodied in a state's constitution, with an
implied 'right' that is just an exemption from committing an offence.
The term 'right' does not even appear in the relevant part of US
copyright law, except to state that it is a limitation on the copyright
holder's rights.


You're still just playing semantic games. The common usage is "fair
use rights." If you mean "... without infringing on the end users
rights, except for fair use rights", then you should say that.


Call it what you like; still, I cannot be infringing on your right when
such a right does not exist to be infringed. If you want to term it a
'right', feel free, but that's not what you're granted under US law or
the Berne Convention. The 'common usage' here leads to a
misinterpretation of what you're entitled to. What is actually stated
is a limitation on the copyright holder's exclusive rights, which is a
very different matter.

--
Ben Sizer

Nov 22 '05 #126
On 17 Nov 2005 01:29:23 -0800, Ben Sizer <ky*****@gmail.com> wrote:
Mike Meyer wrote:
Are you claiming therefore that it's more acceptable to you to have to
access the data remotely every time you use the software than once per
install?
Alex's solution doesn't require special treatment for disaster
recovery and/or planning, and as such is a valid answer to the
question. It may be unacceptable for *other* reasons, but it beats
dictating a disaster recovery plan for your software to the end user
hands down on that basis.


Sorry, I just don't see this as being a significant difference that
makes 'access-always' acceptable and 'access rarely' unacceptable.
No, I am just pointing out that you are mixing up the concept of an
actual 'right' such as one embodied in a state's constitution, with an
implied 'right' that is just an exemption from committing an offence.
The term 'right' does not even appear in the relevant part of US
copyright law, except to state that it is a limitation on the copyright
holder's rights.


You're still just playing semantic games. The common usage is "fair
use rights." If you mean "... without infringing on the end users
rights, except for fair use rights", then you should say that.


Call it what you like; still, I cannot be infringing on your right when
such a right does not exist to be infringed. If you want to term it a
'right', feel free, but that's not what you're granted under US law or
the Berne Convention. The 'common usage' here leads to a
misinterpretation of what you're entitled to. What is actually stated
is a limitation on the copyright holder's exclusive rights, which is a
very different matter.


Do you have a legal right to wear an orange shirt? To sit backwards in
your chair?

Your rights are anything you can do that is not forbidden - the US
constitution is explicitly designed in this way, something that people
often forget. There is no difference between an "explicit" and an
"inferred" right, by design. If you read the text of the US
constitution, even the phrasing makes this clear. In fact, the format
of the Bill of Rights was considered harmful by some of the founding
fathers, because they felt that people would interpret it exactly as
they have - as enumerating the limits of government power, instead of
enumerating the powers themselves.

Therefore, by extending your rights via DRM, you are infringing upon
the rights of your customers. I *do* have a legal right to backup and
archive my purchased software, just as I have a legal right to write
this email, and I have a legal right to watch a DVD I've purchased.

The very basis of law is about drawing lines between where one person's
rights end and another begins. What (all current) DRM solutions do is
extend the rights of the copyright holder. This, *by definition*,
comes at the expense of the rights of everyone else. The granting of
copyright at all is a limitation of the rights of everyone else, but
one that is legally enshrined and that people (generally) accept as
being in the common good. It's very similar to the concept of private
property. The law grants you the right to keep people from walking
into your house. However, if you decide to build an electric fence
across the sidewalk and into the street, you are infringing on their
right to walk past.

Further, if you decide to define "right" as something that is
specifically legally protected, then you have no right to embed DRM in
your software at all - you are *explicitly* expanding the rights
granted to you by copyright law, and you do not have explicit legal
rights to do that. Further, you do not have explicit legal rights to
prevent fair-use copying.

--
Ben Sizer

--
http://mail.python.org/mailman/listinfo/python-list

Nov 22 '05 #128
Alex Martelli wrote:
Modern equivalents of serialization (publishing one chapter at a time on
the web, the next chapter to come only if the author receives enough
payment for the previous one) have been attempted, but without much
success so far; however, the holy grail of "micropayments" might yet
afford a rebirth for such a model -- if paying for a chapter was
extremely convenient and cheap, enough people might choose to do so
rather than risk the next chapter never appearing. Remember that, by
totally disintermediating publishers and bookstores, a novelist may
require maybe 1/10th of what the book would need to gross in stores, in
order to end up with the same amount of cash in his or her pockets.

One could go on for a long time, but the key point is that there may or
may not exist viable monetization models for all sorts of endeavours,
including the writing of novels, depending on a lot of other issues of
social as well as legal structures. Let's not be blinded by one model
that has worked sort of decently for a small time in certain sets of
conditions, into believing that model is the only workable one today or
tomorrow, with conditions that may be in fact very different.


Maybe this micropayment thing is already working and active. What is
the cost of a mouseclick and what is the monetary value of the fact
that someone is clicking on a link? Someone bought virtual property for
real money and sold it later with a lot of profit. There are pages
where one can buy pixels. Maybe me replying to you will provoke some
other chain of events with payoffs for you or me (I hope positive :-)

The idea of using a webservice to hide essential secret parts of your
application can only work well if one makes some random alterations to
the results of the queries. Like GPS signals that are deliberately made
less exact. Obfuscated Python code could for example return variable
precision numbers with a slight random alteration. I think such things
would make it harder to reverse engineer the code behind the server.
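
A rough sketch of the kind of alteration meant here; the model and the
noise scale are invented for illustration:

    import random

    def secret_model(x):
        # Stand-in for the genuinely valuable server-side heuristic.
        return 2.0 * x + 1.0

    def predict(x):
        # Perturb the exact answer so repeated probing recovers the
        # model only up to the noise level.
        exact = secret_model(x)
        return exact * (1.0 + random.uniform(-0.01, 0.01))

    print(predict(10.0))  # roughly 21, never exactly reproducible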

But the more one messes with the ideal output the more often the user
will rather click another link. (or launch another satellite)

Anton.

what's the current exchange rate for clicks and dollars?

Nov 22 '05 #130
In message <11**********************@o13g2000cwo.googlegroups.com>, The
Eternal Squire <et***********@comcast.net> writes
My point exactly. A good application of moderate to large size (100K
lines of code) is about as large as a single person can write without
automation,


You have not been working with the right people. They do exist, but they
are rare.

Stephen
--
Stephen Kellett
Object Media Limited http://www.objmedia.demon.co.uk/software.html
Computer Consultancy, Software Development
Windows C++, Java, Assembler, Performance Analysis, Troubleshooting
Nov 22 '05 #132
Chris Mellon <ar*****@gmail.com> writes:
Your rights are anything you can do that is not forbidden - the US
constitution is explicitly designed in this way, something that people
often forget. There is no difference between an "explicit" and an
"inferred" right, by design. If you read the text of the US
constitution, even the phrasing makes this clear. In fact, the format
of the Bill of Rights was considered harmful by some of the founding
fathers, because they felt that people would interpert it exactly as
they have - as enumerating the limits of government power, instead of
enumerating the powers themselves.


It should be noted that the 10th amendment was added to the bill of
rights to deal with this fear. It states explicitly that any rights
not listed were reserved to the states or the public.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 22 '05 #134
Anton Vredegoor <an*************@gmail.com> wrote:
...
Maybe this micropayment thing is already working and active. What is
the cost of a mouseclick and what is the monetary value of the fact
that someone is clicking on a link? Someone bought virtual property for
I believe that all of the currently dominant models for pricing in this
field are based on auctions -- each would-be advertiser bids how many
cents (or dollars;-) a click-through into their advertisement is worth
to them, Google or one of our competitors shows the "highest bidding"
ads for a query or site (adjusted by all sort of factors, such as for
example the click-through rates), and money changes "hands" only if a
click-through does happen (the amount of money involved may be the
amount that was bid, or a lesser one based for example on a
"second-price auction" or other mechanisms yet -- there's a lot of
fascinating economic literature on auction mechanisms and the various
effects slightly different mechanisms may have).
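
A toy illustration of that second-price idea, with invented bids and
click-through rates:

    # Each advertiser: (name, bid per click in dollars, click-through rate).
    bids = [("A", 0.50, 0.05), ("B", 0.40, 0.10), ("C", 0.30, 0.02)]

    # Rank by expected revenue per impression, i.e. bid * CTR.
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    winner, runner_up = ranked[0], ranked[1]

    # In a simple second-price scheme the winner pays just enough to
    # stay ranked above the runner-up, not its own full bid.
    price = runner_up[1] * runner_up[2] / winner[2]
    print(winner[0], round(price, 2))  # B pays 0.025 / 0.10 = $0.25 per click

Here B wins despite the lower bid, because its higher click-through rate
gives the larger expected revenue per impression.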

If an auction mechanism is well-designed and tuned in minute detail, it
will presumably determine accurately the "monetary value of [a click]
on a link" when that link is an ad paid for by such a mechanism. Value
of clicks on other kinds of links is harder to compute, of course, since
the monetization may be extremely indirect, if one exists at all.
real money and sold it later with a lot of profit. There are pages
where one can buy pixels. Maybe me replying to you will provoke some
other chain of events with payoffs for you or me (I hope positive :-)
Maybe -- but you'd have to estimate the probabilities in order to
estimate the expected payoffs;-).
The idea of using a webservice to hide essential secret parts of your
application can only work well if one makes some random alterations to
the results of the queries. Like GPS signals that are deliberately made
I disagree on this general statement and I have already given two
counterexamples:

a. a webservice which, for some amount X of money, gives an excellent
heuristic estimate of a good cutting-path for a woodcutting tool (for a
set of shapes to be cut out of standard-sized planks of wood by a
numerically driven cutter): this is a case where ESR, acting as a
consultant, advised his clients (who had developed a heuristic for this
task which saved a lot of wood compared to their competitors') to keep
their code closed-source, and it makes a good use case for the "hide
essential secret parts" in general;

b. a (hypothetical) website that, given time-space coordinates (and some
amount Y of money), produces and returns weather predictions that are
better than those you can get from its competitors.

It appears to me that any application of this kind could work well
without at all "making random alterations" to whatever. Point is, if
you develop a better algorithm (or, more likely, heuristic) for good
solutions to such problems, or predictions of just about anything which
might have economic value to somebody, using a webservice to hide the
essential secret parts of your discovery is an option, and it might be a
preferable alternative to relying on patents (since software patents may
not be enforceable everywhere in the world, and even where they're
nominally enforceable it could prove problematic and costly to actually
deter all would-be competitors from undercutting you). I do not see
anything in your post that contradicts this, except the bare unsupported
assertion that a webservice "can only work well if one makes random
alterations".
But the more one messes with the ideal output the more often the user
will rather click another link. (or launch another satellite)
Of course. If my "better weather predictor" is in fact based not on
inventing some new algorithm/heuristic, but on having better or more
abundant raw data due to my private network of satellites or other
observation platforms, this doesn't change the economic situation by all
that much (except that patenting may not even be an option in the latter
case, if there's no patentable innovation in that private network); a
competitor *could* reach or surpass my predictions' quality by investing
enough to re-develop the heuristic or duplicate the sensors-network.
So, my pricing should probably take that risk into account.

Deliberately giving predictions worse than I could have given, in this
context, seems a deliberate self-sabotage without any return.
what's the current exchange rate for clicks and dollars?


As far as I know, it varies wildly depending on the context, but I
suspect you can find ranges of estimates on the web.
Alex
Nov 22 '05 #136
On Thu, 17 Nov 2005 10:53:24 -0800, al***@mail.comcast.net (Alex Martelli) wrote:
Anton Vredegoor <an*************@gmail.com> wrote: [...]
The idea of using a webservice to hide essential secret parts of your
application can only work well if one makes some random alterations to
the results of the queries. Like GPS signals that are deliberately made


I disagree on this general statement and I have already given two
counterexamples:

I agree with your disagreement in general, but I think Anton may be
alluding to the "covert channel" problem, where sometimes randomization
of an external observable is a defense. E.g., if a web site login process
responds faster with a rejection of a bad user name (i.e. is not in the authorized
user list) than it does for a valid user name and a bad password, the timing
difference can be used over time to eke out the private user name list, and
make subsequent password attacks that much easier.

The time difference of course will be degraded with noise, but if the signal
is there (user is/isn't valid), it can be extracted, given time for
statistics -- which of course leads to the defense of only so many
tries per some time interval per username. The point re randomization
is that in this example the covert information channel is variation in time,
and after introducing enough artificial delay in the faster paths to make all
approximately equal, an added random delay can pretty much wipe out the channel.
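
A sketch of those two defenses in code; the credential store, dummy
value, and delay bounds are all invented:

    import hmac
    import random
    import time

    USERS = {"alice": "correct-horse"}  # hypothetical credential store

    def login(username, password):
        # Compare against a dummy entry even for unknown users, so the
        # "name exists" path costs about the same as the "name unknown" path.
        stored = USERS.get(username, "!no-such-user!")
        ok = hmac.compare_digest(stored, password) and username in USERS
        # A random delay blurs whatever timing signal remains.
        time.sleep(random.uniform(0.05, 0.15))
        return ok

(hmac.compare_digest, in today's standard library, also does the string
comparison itself in constant time with respect to the contents.)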

As to covert channels revealing the particulars of a secret algorithm used
to calculate optimum wood cutting or do excellent weather prediction, I'd
say social engineering is probably an easier attack, and a well designed
sequence of problems presented to the wood cutting site would probably
have more information in the answers than in any other observables I can think of.

Which perhaps gets towards Anton's point (or my projection thereof ;-) -- i.e.,
that the answers provided in an experimental probe of an algorithm are "signal"
for what you want to detect, and randomization may put noise in the signal to
defeat detection (even though enough noise might make the algorithm output unsaleable ;-)

a. a webservice which, for some amount X of money, gives an excellent
heuristic estimate of a good cutting-path for a woodcutting tool (for a
set of shapes to be cut out of standard-sized planks of wood by a
numerically driven cutter): this is a case where ESR, acting as a
consultant, advised his clients (who had developed a heuristic for this
task which saved a lot of wood compared to their competitors') to keep
their code closed-source, and it makes a good use case for the "hide
essential secret parts" in general;

b. a (hypothetical) website that, given time-space coordinates (and some
amount Y of money), produces and returns weather predictions that are
better than those you can get from its competitors.

It appears to me that any application of this kind could work well
without at all "making random alterations" to whatever. Point is, if
you develop a better algorithm (or, more likely, heuristic) for good
solutions to such problems, or predictions of just about anything which
might have economic value to somebody, using a webservice to hide the
essential secret parts of your discovery is an option, and it might be a
preferable alternative to relying on patents (since software patents may
not be enforceable everywhere in the world, and even where they're
nominally enforceable it could prove problematic and costly to actually
deter all would-be competitors from undercutting you). I do not see
anything in your post that contradicts this, except the bare unsupported
assertion that a webservice "can only work well if one makes random
alterations".

Yes, IMO that was an overgeneralization of an idea that may however have
some actual narrow applicability.
But the more one messes with the ideal output the more often the user
will rather click another link. (or launch another satellite)


Of course. If my "better weather predictor" is in fact based not on
inventing some new algorithm/heuristic, but on having better or more
abundant raw data due to my private network of satellites or other
observation platforms, this doesn't change the economic situation by all
that much (except that patenting may not even be an option in the latter
case, if there's no patentable innovation in that private network); a
competitor *could* reach or surpass my predictions' quality by investing
enough to re-develop the heuristic or duplicate the sensors-network.
So, my pricing should probably take that risk into account.

Deliberately giving predictions worse than I could have given, in this
context, seems a deliberate self-sabotage without any return.
what's the current exchange rate for clicks and dollars?


As far as I know, it varies wildly depending on the context, but I
suspect you can find ranges of estimates on the web.

The growth of virtual worlds with virtual money and virtual/"real"
currency exchange is interesting. People are actually making real
money investing in and developing virtual real estate and selling
virtual currency profits for real-world money ;-)

Regards,
Bengt Richter
Nov 22 '05 #138
>You have not been working with the right people. They do exist, but they
are rare.


Elucidate?

Nov 22 '05 #140

Bengt Richter wrote:
On Thu, 17 Nov 2005 10:53:24 -0800, al***@mail.comcast.net (Alex Martelli) wrote:
Anton Vredegoor <an*************@gmail.com> wrote: [...]
The idea of using a webservice to hide essential secret parts of your
application can only work well if one makes some random alterations to
the results of the queries. Like GPS signals that are deliberately made


I disagree on this general statement and I have already given two
counterexamples:

I agree with your disagreement in general, but I think Antoon may be
alluding to the "covert channel" problem, where sometimes randomization
of an external observable is a defense. E.g., if a web site login process
responds faster with a rejection of a bad user name (i.e., one not in the authorized
user list) than it does for a valid user name and a bad password, the timing
difference can be used over time to eke out the private user name list, and
make subsequent password attacks that much easier.


Pardon me, but I'm Anton, not Antoon (well, maybe I am, but let's keep
this distinction in order to avoid mental hash collisions).

I agree with Alex and Bengt that my statement was too general, and I
even admit that as I wrote it down the thought of making it less
provocative crossed my mind. However, I felt safe because I wrote 'only
work *well*' instead of 'only work *if*', and what is working well is
open for discussion, isn't it? Further in my post I wrote something
about adding random fluctuations to make it harder to reverse engineer a
procedure, so I felt even safer. Not so with Alex's thorough analysis
though :-)

What was mostly on my mind (but I didn't mention it) is that for
something to be commercially viable there should be some kind of
pricing strategy (NB in our current economic view of the world) where a
better-paying user gets a VIP interface and poor people get the
standard treatment.

Since one has to have the optimal result anyway in order to sell it to
the best payers, it would be impractical to recompute less accurate
values. Why not just add a random part to make it less valuable for the
non-paying user? I'm thinking about things like specifying a real-value
interval from which the user can extract data (this is also a data
compression method; see arithmetic coding for more info).
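
To sketch that interval idea in Python (the function and key names here
are made up for illustration): the interval is positioned pseudo-randomly
but *deterministically* per query, so a non-paying user cannot simply
repeat the query and average the intervals back into the exact value.

import hashlib, struct

def degrade(value, width, query_key):
    # Derive a reproducible fraction in [0, 1) from the query itself.
    digest = hashlib.sha256(query_key.encode()).digest()
    frac = struct.unpack(">I", digest[:4])[0] / 2.0 ** 32
    lo = value - frac * width       # the interval always contains value
    return (lo, lo + width)

# Paying users would get the exact 21.37; free users get a wide interval.
print(degrade(21.37, 2.0, "lat=52.1;lon=5.2;t=2005-11-18T12:00"))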

<snip>
Which perhaps gets towards Antoon's point (or my projection thereof ;-) -- i.e.,
that the answers provided in an experimental probe of an algorithm are "signal"
for what you want to detect, and randomization may put noise in the signal to
defeat detection (even though enough noise might make the algorithm output unsaleable ;-)


Yeah, sometimes people measure temperature fluctuations in the CPU in
order to get clues about how an algorithm works :-) But in fact my mind
works more like some intuitive device that suggests whether some
point is safe enough to post, without always thinking through
all the details.

a. a webservice which, for some amount X of money, gives an excellent
heuristic estimate of a good cutting-path for a woodcutting tool (for a
set of shapes to be cut out of standard-sized planks of wood by a
numerically driven cutter): this is a case where ESR, acting as a
consultant, advised his clients (who had developed a heuristic for this
task which saved a lot of wood compared to their competitors') to keep
their code closed-source, and it makes a good use case for the "hide
essential secret parts" in general;

If the heuristic always gives the same answer to the same problem, it
would be easier to predict the results. Oh no, now some mathematician
surely will prove me wrong :-)
b. a (hypothetical) website that, given time-space coordinates (and some
amount Y of money), produces and returns weather predictions that are
better than those you can get from its competitors.

It appears to me that any application of this kind could work well
without at all "making random alterations" to whatever. Point is, if
you develop a better algorithm (or, more likely, heuristic) for good
solutions to such problems, or predictions of just about anything which
might have economic value to somebody, using a webservice to hide the
essential secret parts of your discovery is an option, and it might be a
preferable alternative to relying on patents (since software patents may
not be enforceable everywhere in the world, and even where they're
nominally enforceable it could prove problematic and costly to actually
deter all would-be competitors from undercutting you). I do not see
anything in your post that contradicts this, except the bare unsupported
assertion that a webservice "can only work well if one makes random
alterations".

Yes, IMO that was an overgeneralization of an idea that may however have
some actual narrow applicability.


OK, although it's a bit tricky to prove this by using an example where
the randomness is already in the problem from the start -- if one groups
very chaotic processes in the same category as random processes, of
course.
But the more one messes with the ideal output the more often the user
will rather click another link. (or launch another satellite)


Of course. If my "better weather predictor" is in fact based not on
inventing some new algorithm/heuristic, but on having better or more
abundant raw data due to my private network of satellites or other
observation platforms, this doesn't change the economic situation by all
that much (except that patenting may not even be an option in the latter
case, if there's no patentable innovation in that private network); a
competitor *could* reach or surpass my predictions' quality by investing
enough to re-develop the heuristic or duplicate the sensors-network.
So, my pricing should probably take that risk into account.

Deliberately giving predictions worse than I could have given, in this
context, seems a deliberate self-sabotage without any return.

Not always, for example with a gradient in user status according to how
much they pay. Note that I don't agree at all with such a practice, but
I'm trying to explain how money is made now instead of thinking about
how it should be made.
what's the current exchange rate for clicks and dollars?


As far as I know, it varies wildly depending on the context, but I
suspect you can find ranges of estimates on the web.

The growth of virtual worlds with virtual money and virtual/"real"
currency exchange is interesting. People are actually making real
money investing in and developing virtual real estate and selling
virtual currency profits for real-world money ;-)


Yes. Someday our past will be just a variation from the ideal
development that was retroactively fitted to the state the future is
in.

Nice to be speaking to you both.

Anton

Nov 22 '05 #142

Anton Vredegoor <an*************@gmail.com> wrote:
...
What was mostly on my mind (but I didn't mention it) is that for
something to be commercially viable there should be some kind of
pricing strategy (NB in our current economic view of the world) where a
better-paying user gets a VIP interface and poor people get the
standard treatment.
Some fields work well with such market segmentation, but others work
perfectly well without it. iTunes songs are 99 cents (for USA
residents; there IS some segmentation by national markets, imposed on
Apple by the music industry) whoever is buying them; I personally think
it would hurt iTunes' business model if the 99-cents song was a "cheap
version" and you could choose to "upgrade" to a better-sounding one for
extra money -- giving the mass market the perception that they're
getting inferior goods may well hurt sales and revenue.

Market segmentation strategies and tactics are of course a huge field of
study, both theoretical and pragmatic (and it's as infinitely
fascinating in the theoretical view, as potentially lucrative or ruinous
in practical application). It's definitely wrong to assume, as in your
statement above, that uniform pricing (no segmentation, at least not
along that axis) cannot work in a perfectly satisfactory way.

If the heuristic always gives the same answer to the same problem, it
would be easier to predict the results. Oh no, now some mathematician
surely will prove me wrong :-)
"Easier" need not be a problem; even assuming that the heuristic uses no
aspect whatever of randomness, you may easily think of real-world cases
where ``reverse engineering'' the heuristic from its results is
computationally infeasible anyway. Take the problem of controlling an NC
saw to cut a given set of shapes out of a standard-sized wood plank,
which is one of the real-world cases I mentioned. It doesn't seem to me
that trying to reverse-engineer a heuristic is any better than trying to
devise one (which may end up being better) from ingenuity and first
principles, even if you had thousands of outputs from the secret
heuristic at hand (and remember, getting each of these outputs costs you
money, which you have to pay to the webservice with the heuristic).

OK, although it's a bit tricky to prove this by using an example where
the randomness is already in the problem from the start -- if one groups
very chaotic processes in the same category as random processes, of
course.


Well, economically interesting prediction problems do tend to deal with
systems that are rich and deep and complex enough to qualify as chaotic,
if not random -- from weather to the price of oil, etc etc. But
problems of optimization under constraint, such as the NC saw one,
hardly qualify as such, it seems to me -- no randomness nor necessarily
any chaotic qualities in the problem, just utter computational
infeasibility of algorithmic solutions and the attendant need to look
for "decently good" heuristics instead.
Deliberately giving predictions worse than I could have given, in this
context, seems a deliberate self-sabotage without any return.


Not always, for example with a gradient in user status according to how
much they pay. Note that I don't agree at all with such a practice, but
I'm trying to explain how money is made now instead of thinking about
how it should be made.


Money is made in many ways, essentially by creating (perceived) buyer
advantage and capturing some part of it -- but market segmentation is
just one of many ways. IF your predictions are ENORMOUSLY better than
those the competition can make, then offering for free "slightly
damaged" predictions, that are still better than the competition's
despite the damage, MIGHT be a way to market your wares -- under a lot
of other assumptions, e.g., that there is actual demand for the best
predictions you can make, the ones you get paid for, so that your free
service doesn't undermine your for-pay one. It just seems unlikely that
all of these preconditions would be satisfied at the same time; better
to limit your "free" predictions along other axes, such as duration or
location, which doesn't require your predictions' accuracy advantage to
be ENORMOUS _and_ gives you a lot of control on "usefulness" of what
you're supplying for free -- damaging the quality by randomization just
seems unlikely to be the optimal strategy here, even if you had
determined (or were willing to bet the firm) that market segmentation is
really the way to go here.

Analogy: say you make the best jams in the world and want to attract
customers by showing them that's the case via free samples. Your
randomization strategy seems analogous to: damage your jam's free
samples by adding tiny quantities of ingredients that degrade their
flavor -- if your degraded samples are still much better than the
competitors' jam, and there's effective demand for really "perfect" jam,
this strategy MIGHT work... but it seems a very, very far-fetched one
indeed. The NORMAL way to offer free samples to enhance, not damage,
the demand for your product, would be to limit the samples along
completely different axes -- damaging your product's quality
deliberately seems just about the LAST thing you'd want to do; rather,
you'd offer, say, only tiny amounts for sampling, and already spread on
toast so they need to be tasted right on the spot, enticing the taster
to purchase a jar so they can have the amount of jam they choose at the
time and place of their choosing.

I hope this analogy clarifies why, while I don't think deliberate damage
of result quality can be entirely ruled out, I think it's extremely
unlikely to make any sense compared to other market segmentation
tactics, even if you DO grant that it's worth segmenting (free samples
are an extremely ancient and traditional tactic in all kinds of food
selling situations, after all, and when well-designed and promoting a
product whose taste is indeed worth a premium price, they have been
repeatedly shown to be potentially quite effective -- so, I'm hoping
there will be no debate that the segmentation might perfectly well be
appropriate for this "analogy" case, whether it is or isn't in the
originally discussed case of selling predictions-via-webservices).
Alex
Nov 22 '05 #144
On 18 Nov 2005 06:56:38 -0800, "Anton Vredegoor" <an*************@gmail.com> wrote:
[...]
Pardon me, but I'm Anton, not Antoon (well maybe I am but lets keep
this distinction in order to avoid mental hash collisions)

D'oh. I'm sorry. Please pardon _me_ ;-/

Regards,
Bengt Richter
Nov 22 '05 #146
Ben Sizer wrote:
Mike Meyer wrote:
"Ben Sizer" <ky*****@gmail.com> writes:
Decompyle (http://www.crazy-compilers.com/decompyle/ ) claims to be
pretty advanced. I don't know if you can download it any more to test
this claim though.
No, it doesn't claim to be advanced. It claims to be good at what it
does. There's no comparison with other decompilers at all. In
particular, this doesn't give you any idea whether or not similar
products exist for x86 or 68k binaries.


That's irrelevant. We don't require a citable source to prove the
simple fact that x86 binaries do not by default contain symbol names
whereas Python .pyc and .pyo files do contain them. So any
decompilation of (for example) C++ code is going to lose all the
readable qualities, as well as missing any symbolic constants,
enumerations, templated classes and functions, macros, #includes,
inlined functions, typedefs, some distinctions between array indexing
and pointer arithmetic, which inner scope a simple data variable is
declared in, distinctions between functions/member functions declared
as not 'thiscall'/static member functions, const declarations, etc.


If your protection actually boils down to "if (licensed) ...",
everything you described will only slightly inconvenience an experienced
cracker. I've read a cracker's detailed walkthrough; it took him 26
minutes to crack a program that asks for a serial number. Basically it
looks like this: set a breakpoint on the event where the "OK" button is
pressed after a serial number is entered, set a watchpoint on the memory
where the serial number is stored, study all places where this memory is
read, and find the ultimate "jump if" instruction.
I've dealt with some very powerful disassemblers and
decompilers, but none of them worked on modern architectures.
You can definitely extract something useful from them, but without
symbol names you're going to have to be working with a good debugger
and a decent knowledge of how to use it if you want to find anything
specific. Whereas Python could give you something pretty obvious such
as:

6 LOAD_FAST 0 (licensed)
9 JUMP_IF_FALSE 9 (to 21)
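
For reference, listings like that come straight from the standard
library's dis module; here is a toy example (the function itself is made
up), with the caveat that exact opcode names vary across Python versions:

import dis

def startup(licensed):
    if licensed:
        return "full version"
    return "demo"

dis.dis(startup)   # prints a disassembly much like the listing above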


I can suggest at least two methods to obfuscate Python byte code:

1. Apply some function before writing byte code to a file, and apply the
reverse function upon reading (a minimal sketch follows below).

2. Take opcodes.h and assign new random numbers to the opcodes; also take
ceval.c and reorder the opcode handlers in the switch statement to make
reverse engineering even harder.

I believe this will require at least several hours of manual work from a
cracker before they can use a stock Python disassembler.
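
A minimal sketch of idea 1 in modern Python, assuming a single-byte XOR
key (trivially weak, but it shows the mechanism; the helper names are
made up):

KEY = 0x5A

def scramble(data, key=KEY):
    # XOR is its own inverse, so one function both hides and reveals.
    return bytes(b ^ key for b in data)

def save_obfuscated(path, code_bytes):
    with open(path, "wb") as f:
        f.write(scramble(code_bytes))

def load_obfuscated(path):
    with open(path, "rb") as f:
        return scramble(f.read())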


My interest lies in being able to use encrypted data (where 'data' can
also include parts of the code) so that the data can only be read by my
Python program, and specifically by a single instance of that program.
You would be able to make a backup copy (or 20), you could give the
whole lot to someone else, etc etc. I would just like to make it so
that you can't stick the data file on Bittorrent and have the entire
world playing with data that was only purchased once.


This is doable even in Python. The basic idea is that you need to spread
your obfuscation code and blend it with the algorithm:

1. Generate a user identity on your server and insert it inside your
distribution. Spread it all over the code: don't store it in a file,
don't store it in one big variable; instead, divide the user identity
into four-bit parts and spread their storage over different places. Note
this actually doesn't have anything to do with Python; it's equally true
for C/C++. If you don't follow this, your protection is vulnerable to a
replay attack: crackers will just distribute the data file + a stolen
user identity.

2. Generate custom data files for each user, using various parts of the
user id as the scrambling key for different parts of the data file. For
example: suppose you have a data file for a game in which the initial
coordinates of characters (0..65535, 0..65535) are stored as four bytes.
Normal code to load them from the file would look like

x, y = buf[0] + 256*buf[1], buf[2] + 256*buf[3]

while the obfuscated loader would look like

x, y = buf[0] + c*((buf[1] + t + 7) & (c - 1)), buf[2] + c*((buf[3] + t + 7) & (c - 1))

where t contains some bits from the user id and c == 256; the data file
then stores each high byte pre-scrambled as (hi - t - 7) & (c - 1), so
only a build carrying the right t decodes the coordinates correctly.

I hope that's not too vague a description. I think this approach will do
what you want. Don't forget that you will also need to bind your program
to the hardware, or users will just distribute your program + data file
together. I hope they won't mind that your program is tied to one
computer :)
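
One rough sketch of such a hardware binding, using only the standard
library (treating the MAC address from uuid.getnode() as the machine
fingerprint is this sketch's assumption, and it is easily spoofed):

import hashlib, uuid

def machine_fingerprint(user_id):
    # uuid.getnode() returns the MAC address of one network interface;
    # hashing it together with the user id ties the data file (loosely)
    # to both this user and this machine.
    mac = "%012x" % uuid.getnode()
    return hashlib.sha256((user_id + mac).encode()).hexdigest()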

Nov 22 '05 #148
On Tue, 15 Nov 2005 03:06:31 -0800, Ben Sizer wrote:
My interest lies in being able to use encrypted data (where 'data' can
also include parts of the code) so that the data can only be read by my
Python program, and specifically by a single instance of that program.
You would be able to make a backup copy (or 20), you could give the
whole lot to someone else, etc etc. I would just like to make it so
that you can't stick the data file on Bittorrent and have the entire
world playing with data that was only purchased once.


Well, if and when you find a way to make water not wet and three-sided
squares, then you can turn your mind towards solving the *really* hard
problem: how to make bytes not copyable.
--
Steven.

Nov 22 '05 #150

This thread has been closed and replies have been disabled. Please start a new discussion.
