Hello:
Please do not get worked up over my crossposting to lang.lisp,
lang.scheme, and lang.functional.
There has been a discussion on heise.de about the following:
What is the correct answer of the following: -1^2
I had thought it is +1. My Bigloo Scheme gives: (expt -1 2) --> 1
However, the R language, tailored to statistical calculations, gives the
following: print(-1^2) --> -1
Some have argued on heise.de that -1 is, mathematically speaking, the
right answer.
Thanks for any more insights into a not-so-easy problem, I guess.
Förster vom Silberwald <··········@hotmail.com> writes:
> Hello:
>
> Please do not get worked up over my crossposting to lang.lisp,
> lang.scheme, and lang.functional.
>
> There has been a discussion on heise.de about the following:
>
> What is the correct answer of the following: -1^2
>
> I had thought it is +1. My Bigloo Scheme gives: (expt -1 2) --> 1
>
> However, the R language, tailored to statistical calculations, gives the
> following: print(-1^2) --> -1
>
> Some have argued on heise.de that -1 is, mathematically speaking, the
> right answer.
>
> Thanks for any more insights into a not-so-easy problem, I guess.
In Lisp, it's not a problem at all; such an ambiguity is resolved by
all those stupid parentheses that people are always complaining about:
(expt -1 2) => 1
(- (expt 1 2)) => -1
No ambiguities here, except for the question of how to interpret the
string "-1^2", which is the problem of the "other" language to solve
or conventionalize.
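For comparison, here is how an infix language resolves that string; Python is used purely as an illustration (it is not one of the languages discussed in this thread), but it follows the conventional precedence in which exponentiation binds more tightly than unary minus:

```python
# ** binds more tightly than unary minus, so -1**2 parses as -(1**2),
# while explicit parentheses force the other reading.
print(-1**2)    # -1
print((-1)**2)  # 1
```

This is exactly the grouping that (- (expt 1 2)) and (expt -1 2) spell out unambiguously in Lisp.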
--
Duane Rettig ·····@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182
On 3 Oct, 19:13, Förster vom Silberwald <··········@hotmail.com>
wrote:
> Hello:
>
> Please do not get worked up over my crossposting to lang.lisp,
> lang.scheme, and lang.functional.
>
> There has been a discussion on heise.de about the following:
>
> What is the correct answer of the following: -1^2
>
> I had thought it is +1. My Bigloo Scheme gives: (expt -1 2) --> 1
>
> However, the R language, tailored to statistical calculations, gives the
> following: print(-1^2) --> -1
>
> Some have argued on heise.de that -1 is, mathematically speaking, the
> right answer.
>
> Thanks for any more insights into a not-so-easy problem, I guess.
In R:
> -1^2
-1
> (-1)^2
1
The lisp expression is equivalent to the second R expression, not the
first.
Förster vom Silberwald wrote:
> Hello:
>
> Please do not get worked up over my crossposting to lang.lisp,
> lang.scheme, and lang.functional.
>
> There has been a discussion on heise.de about the following:
>
> What is the correct answer of the following: -1^2
>
> I had thought it is +1. My Bigloo Scheme gives: (expt -1 2) --> 1
>
> However, the R language, tailored to statistical calculations, gives the
> following: print(-1^2) --> -1
>
> Some have argued on heise.de that -1 is, mathematically speaking, the
> right answer.
>
> Thanks for any more insights into a not-so-easy problem, I guess.
You probably mean: what is the meaning of this notation: -1^2
Is it -(1^2) or (-1)^2?
All languages I use interpret it as -(1^2) which is what a mathematician
would expect. (Think -x^2.) But I can imagine that there exist
languages which treat the minus sign as part of the number, and not as a
negation operator.
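The -x^2 argument can be checked in any language that follows the mathematical convention. A sketch in Python (again used only as an illustration; it is not one of the languages under discussion):

```python
# With the mathematical convention, -x**2 means -(x**2), because
# exponentiation binds more tightly than the negation operator.
x = 1
print(-x**2)    # -1, i.e. -(x**2)
print((-x)**2)  # 1
```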
--
Szabolcs
On Oct 3, 7:27 pm, Szabolcs Horvát <········@gmail.com> wrote:
> Förster vom Silberwald wrote:
>
> You probably mean: what is the meaning of this notation: -1^2
>
> Is it -(1^2) or (-1)^2?
>
> All languages I use interpret it as -(1^2) which is what a mathematician
> would expect. (Think -x^2.) But I can imagine that there exist
> languages which treat the minus sign as part of the number, and not as a
> negation operator.
After 300 messages on heise.de I gave up on reading, because it turned
out there are two camps: -1 and +1.
I also thought +1 would be the right answer and that the sign belongs to
the number.
They were then arguing the opposite, for a=1: 0 - a^2 = -1
I would have been interested in how functional languages (though it is
not necessarily related to functional programming) treat such a
problem.
What do Haskell and OCaml give?
Förster vom Silberwald wrote:
> After 300 messages on heise.de I gave up on reading, because it turned
> out there are two camps: -1 and +1.
>
> I also thought +1 would be the right answer and that the sign belongs to
> the number.
There is no right answer, only conventions. Different languages use
different conventions. If you're writing mathematics, then it's better
to use parentheses to avoid confusion.
> They were then arguing the opposite, for a=1: 0 - a^2 = -1
>
> I would have been interested in how functional languages (though it is
> not necessarily related to functional programming) treat such a
> problem.
>
> What do Haskell and OCaml give?
It seems that in OCaml the negation operator binds more tightly than
the power operator:
-.1.**2. gives 1.
--
Szabolcs
Förster vom Silberwald wrote:
> What do Haskell and OCaml give?
Haskell:
Prelude> -1^2
-1
Mathematica returns the same: http://www.mathe-online.at/Mathematica/
--
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe0qm1$nel$1@online.de>
Förster vom Silberwald schrieb:
> There has been a discussion on heise.de about the following:
>
> What is the correct answer of the following: -1^2
Mathematically, since ^ binds more tightly than */, which in turn binds
more tightly than +-, it's -(1^2).
Regards,
Jo
Joachim Durchholz wrote:
> Förster vom Silberwald schrieb:
>
>> There has been a discussion on heise.de about the following:
>>
>> What is the correct answer of the following: -1^2
>
>
> Mathematically, since ^ binds more tightly than */, which in turn binds
> more tightly than +-, it's -(1^2).
That's a - of a different color.
kt
--
http://www.theoryyalgebra.com/
"We are what we pretend to be." -Kurt Vonnegut
Joachim Durchholz <··@durchholz.org> writes:
>Mathematically, since ^ binds more tightly than */, which in turn binds
>more tightly than +-, it's -(1^2).
Sure is a good thing, isn't it, that in God's programming language we don't
have infix operators!
From: Klaus Schilling
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <87ve9n5m6e.fsf@web.de>
··@cs.berkeley.edu (Brian Harvey) writes:
>
> Sure is a good thing, isn't it, that in God's programming language we don't
> have infix operators!
exactly. this underlines the absolute superiority of S-Expression languages
Klaus Schilling
> ··@cs.berkeley.edu (Brian Harvey) writes:
>> Sure is a good thing, isn't it, that in God's programming language we don't
>> have infix operators!
>
> exactly. this underlines the absolute superiority of S-Expression languages
No, it underlines the importance of clearly specified syntactic rules.
Every serious programming language has these, which makes the equivalent
of -1^2 unambiguous in any such language.
The only questions in this case have to do with the interpretation of
"standard" mathematical notation, and all that would be required to
resolve those questions are some syntactic rules.
Anton
Anton van Straaten <·····@appsolutions.com> wrote:
> > ··@cs.berkeley.edu (Brian Harvey) writes:
> >> Sure is a good thing, isn't it, that in God's programming language we don't
> >> have infix operators!
> >
> > exactly. this underlines the absolute superiority of S-Expression languages
>
> No, it underlines the importance of clearly specified syntactic rules.
C's precedence rules are clearly specified. So are the rules for
Fizzbin (the card game). They're just baroque. Clear specification is
important, but so is simplicity.
> Every serious programming language has these, which makes the equivalent
> of -1^2 unambiguous in any such language.
>
Hence the OP.
> The only questions in this case have to do with the interpretation of
> "standard" mathematical notation, and all that would be required to
> resolve those questions are some syntactic rules.
>
Is it Tuesday? At night? On Beta Antares IV?
Bob Felts wrote:
> C's precedence rules are clearly specified. So are the rules for
> Fizzbin (the card game). They're just baroque. Clear specification is
> important, but so is simplicity.
Sure. In the example in question, though, the applicable rules are
simple enough, even in C. My point is that in this example, the
"absolute superiority" of S-exp languages is not so clear.
In another post, I've just pointed out the baroqueness of Scheme's (and
CL's) way of distinguishing between subtraction and negation, which is
arguably the result of simplicity taken too far.
>> The only questions in this case have to do with the interpretation of
>> "standard" mathematical notation, and all that would be required to
>> resolve those questions are some syntactic rules.
>>
>
> Is it Tuesday? At night? On Beta Antares IV?
Or does the current S-exp have one argument, or many? And should we
make that determination at compile time, or runtime? Perhaps we can't
make that determination at compile time because we don't know for sure
what "-" is bound to (unless we're using R6RS, of course).
Anton
Anton van Straaten <·····@appsolutions.com> writes:
> Bob Felts wrote:
> > C's precedence rules are clearly specified. So are the rules for
> > Fizzbin (the card game). They're just baroque. Clear specification is
> > important, but so is simplicity.
>
> Sure. In the example in question, though, the applicable rules are
> simple enough, even in C. My point is that in this example, the
> "absolute superiority" of S-exp languages is not so clear.
If this were so simple, it wouldn't have generated this entire thread.
It's also a bit disingenuous to extract just the applicable rules and
say that they are simple. I think that for any given problem with
precedence rules, the applicable rules are also simple, once they have
been extracted from the entire system of precedence rules. The problem
is that the entire system that one has to work with when writing and
interpreting expressions is sufficiently complex that it leads to
mistakes or ambiguity, because the system as a whole is not easily
grasped.
> In another post, I've just pointed out the baroqueness of Scheme's (and
> CL's) way of distinguishing between subtraction and negation, which is
> arguably the result of simplicity taken too far.
It hardly seems all that baroque. I think if one were REQUIRED to write
(- 0 1) in place of (- 1), THAT would be baroque. Instead, we get an
example of convenience superseding simplicity (in this case I mean
simplicity coming from a uniform interpretation of the "-" function).
But other programming languages, including C, distinguish between unary
and binary minus operators. So having the same distinction in Lisp
doesn't seem out of line with common practice.
--
Thomas A. Russ, USC/Information Sciences Institute
···@sevak.isi.edu (Thomas A. Russ) writes:
>It hardly seems all that baroque. I think if one were REQUIRED to write
>(- 0 1) in place of (- 1), THAT would be baroque. Instead, we get an
>example of convenience superseding simplicity (in this case I mean
>simplicity coming from a uniform interpretation of the "-" function).
I don't find (- x) vs. (- x y) confusing because, as people have said, it
follows the same meanings for - that we're all accustomed to in Real Life.
What always drives me crazy is (- x y z); I have to look up every time
whether it folds left or right. I would never use that in my code.
I would never write "x - y - z" in a math context, either; I'd use
parentheses no matter which way I meant it.
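For the record, in Scheme and Common Lisp (- x y z) folds left, i.e. it means (x - y) - z. A minimal sketch of that behavior in Python (the helper name `minus` is hypothetical, used only to mirror the Lisp operator):

```python
from functools import reduce
import operator

def minus(*args):
    # Sketch of the Lisp n-ary "-": negation for one argument, and a
    # left fold of subtraction for two or more, so (- x y z) means
    # (x - y) - z, not x - (y - z).
    if len(args) == 1:
        return -args[0]
    return reduce(operator.sub, args)

print(minus(5))         # -5
print(minus(10, 3, 2))  # (10 - 3) - 2 = 5
```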
In article <············@online.de>,
Joachim Durchholz <··@durchholz.org> wrote:
> Förster vom Silberwald schrieb:
> > There has been a discussion on heise.de about the following:
> >
> > What is the correct answer of the following: -1^2
>
> Mathematically, since ^ binds more tightly than */, which in turn binds
> more tightly than +-, it's -(1^2).
But is this really a matter of binding? Is the - really an operator, or
is it part of the lexical syntax of a negative number? I.e. is -1 the
negation operator followed by an integer literal, or is it just an
integer literal?
In many languages, a leading sign is part of the syntax of number
literals.
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
Barry Margolin wrote:
> In article <············@online.de>,
> Joachim Durchholz <··@durchholz.org> wrote:
>
>> Förster vom Silberwald schrieb:
>>> There has been a discussion on heise.de about the following:
>>>
>>> What is the correct answer of the following: -1^2
>> Mathematically, since ^ binds more tightly than */, which in turn binds
>> more tightly than +-, it's -(1^2).
>
> But is this really a matter of binding? Is the - really an operator, or
> is it part of the lexical syntax of a negative number? I.e. is -1 the
> negation operator followed by an integer literal, or is it just an
> integer literal?
He's talking about modern mathematical notation. In mathematics,
writing the equivalent of -1^2 (namely, unary negation, a 1, and then a
superscripted 2), means -(1^2), not (-1)^2.
For programming languages, it depends on how they deal with numeric
literals and their order of operations.
--
Erik Max Francis && ···@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM, Y!M erikmaxfrancis
Do not seek death. Death will find you.
-- Dag Hammarskjold
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe2dka$fl5$1@online.de>
Barry Margolin schrieb:
> In article <············@online.de>,
> Joachim Durchholz <··@durchholz.org> wrote:
>
>> Förster vom Silberwald schrieb:
>>> There has been a discussion on heise.de about the following:
>>>
>>> What is the correct answer of the following: -1^2
>> Mathematically, since ^ binds more tightly than */, which in turn binds
>> more tightly than +-, it's -(1^2).
>
> But is this really a matter of binding?
It is.
> Is the - really an operator, or
> is it part of the lexical syntax of a negative number?
In mathematics, it is an operator. There are no literals for negative
numbers in mathematics, just as there are no literals for irrational
numbers (you need a formula to display them).
> I.e. is -1 the
> negation operator followed by an integer literal, or is it just an
> integer literal?
The former in mathematics.
The latter in many programming languages. That's because otherwise there
would be no way of representing the minimum integer on a
two's-complement machine (i.e. for 16-bit integers, the range is from
-32768 to 32767, but if you interpret -32768 as an operator plus a
literal, you'd have to represent 32768 and that's outside the range).
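The range asymmetry can be made concrete by packing values into a signed 16-bit word; a sketch in Python using the standard `struct` module (chosen here just to emulate a 16-bit two's-complement machine):

```python
import struct

# 16-bit two's complement covers -32768..32767: the minimum -32768 is
# representable, but its magnitude 32768 is not, which is why reading
# "-32768" as operator-plus-literal fails on such a machine.
struct.pack('>h', -32768)       # fits
try:
    struct.pack('>h', 32768)    # the magnitude alone does not fit
except struct.error:
    print("32768 is outside the signed 16-bit range")
```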
In a language that does arbitrary-size integer arithmetic, such a
deviation from mathematical conventions is unnecessary. (Ada is an
interesting border case: it does fixed-size integer arithmetic with
integers, but specifies arbitrary-size arithmetic for constant expressions.)
For languages with arbitrary-size integer arithmetic, it becomes a
question of personal judgement whether one should stick with
mathematical or with programming language conventions.
A third option would be to sidestep the issue entirely and not have ^ as
an operator at all. This can be justified by saying that exponentiation
is very rarely used in programming anyway.
Once that's done, all arithmetic expressions will give the same results
whether - is seen as an operator or as a part of the integer literal, so
the language designer can decide whatever is more convenient to him and
won't have to deal with a mob of angry programmers.
Regards,
Jo
On 4 Okt., 12:00, Joachim Durchholz <····@durchholz.org> wrote:
> Barry Margolin schrieb:
>
> > In article <············@online.de>,
> > Joachim Durchholz <····@durchholz.org> wrote:
>
> >> Förster vom Silberwald schrieb:
> >>> There has been a discussion on heise.de about the following:
>
> >>> What is the correct answer of the following: -1^2
> >> Mathematically, since ^ binds more tightly than */, which in turn binds
> >> more tightly than +-, it's -(1^2).
>
> > But is this really a matter of binding?
>
> It is.
>
> > Is the - really an operator, or
>
> > is it part of the lexical syntax of a negative number?
>
> In mathematics, it is an operator. There are no literals for negative
> numbers in mathematics, just as there are no literals for irrational
> numbers (you need a formula to display them).
>
> > I.e. is -1 the
>
> > negation operator followed by an integer literal, or is it just an
> > integer literal?
>
> The former in mathematics.
> The latter in many programming languages. That's because otherwise there
> would be no way of representing the minimum integer
No. How about 0xFFFF or 0177777?
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe35fi$e07$1@online.de>
Ingo Menger schrieb:
>>> negation operator followed by an integer literal, or is it just an
>>> integer literal?
>> The former in mathematics.
>> The latter in many programming languages. That's because otherwise there
>> would be no way of representing the minimum integer
>
> No. How about 0xFFFF or 0177777?
Nit to pick: 0x8000, not 0xFFFF.
Anyway, 0x8000 would be a way to write that number, though strictly
speaking, it would be the unsigned integer 32768, not the signed integer
-32768. In a language like C that wouldn't matter much, of course.
Regards,
Jo
On 4 Okt., 18:47, Joachim Durchholz <····@durchholz.org> wrote:
> Ingo Menger schrieb:
>
> >>> negation operator followed by an integer literal, or is it just an
> >>> integer literal?
> >> The former in mathematics.
> >> The latter in many programming languages. That's because otherwise there
> >> would be no way of representing the minimum integer
>
> > No. How about 0xFFFF or 0177777?
>
> Nit to pick: 0x8000, not 0xFFFF.
Of course. :)
Joachim Durchholz <··@durchholz.org> writes:
>In mathematics, it is an operator. There are no literals for negative
>numbers in mathematics, just as there are no literals for irrational
>numbers (you need a formula to display them).
Can you support this with a citation? I don't believe it. When you're
solving an equation and end up writing
x = -3
I don't believe that any mathematician sees this as an irreducible
formula for a function application; they see the number negative three.
Similarly, if one is trying to explain signed integer arithmetic by
writing something like
0 - 3 = -3
the intent clearly is to denote a negative number after the equal sign.
This ambiguity in the meaning of unary minus is why the math education
community introduced the raised negation sign, writing
        _
0 - 3 = 3
(I hope the ASCII art works in everybody's character set...)
It might be right that /in ambiguous contexts/ the convention is to
interpret - as an operator rather than as part of a number.
Brian Harvey wrote:
> It might be right that /in ambiguous contexts/ the convention is to
> interpret - as an operator rather than as part of a number.
Or even in unambiguous contexts, like -(1^2).
In mathematical notation and in non-S-exp programming languages, there
are typically three syntactically distinguishable meanings for "-": the
subtraction and negation operators, and part of the literal syntax of
negative numbers.
Scheme only has two syntactically distinguishable meanings for "-" at
the lexical level: it's either an identifier or numeric syntax. As an
identifier, the language definition specifies that "-" is the name of a
single operator that performs either subtraction or negation, depending
on how many arguments it receives.
This is actually rather quaint, and is presumably an artifact of the
lack of control over standard bindings in the pre-module-system days. A
smart compiler would want to distinguish the single-argument case at the
S-exp level, and treat it as negation. This would be similarly useful
when using S-exps as a mathematical notation.
Anton
In article <·············@agate.berkeley.edu>,
Brian Harvey <··@cs.berkeley.edu> wrote:
> Can you support this with a citation? I don't believe it. When you're
> solving an equation and end up writing
>
> x = -3
>
> I don't believe that any mathematician sees this as an irreducible
> formula for a function application; they see the number negative three.
Numeric literals have structure, even though we are so used to it that
we often forget about it. The numeral "421" is not an atomic symbol
that we have memorized to stand for a particular number (unless we
have dealt with that number extensively), but rather it is shorthand
for an expression: given a = 9+1, "421" stands for
(4 * a * a) + (2 * a) + 1.
Similarly, when given a numeral "-X" (where X is a sequence of
digits), the value it stands for is defined as being the negation of
the value that X stands for.
So when given a negative numeral, the negation operation _is_ applied
at some point when resolving the value of the expression. At most you
can argue that this operation happens "during parsing", at a previous
stage, just as literals in programming languages are parsed during lexing.
But it is not at all obvious to me that such concepts are directly
applicable to the way humans read math...
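That expansion of numerals, including a leading minus as a final negation step, can be written out as a left fold; a sketch in Python (the helper name `numeral_value` is hypothetical):

```python
def numeral_value(s, base=10):
    # Resolve a numeral the way described above: digits expand
    # positionally (a Horner-style left fold, so "421" becomes
    # (4*a*a) + (2*a) + 1 with a = base), and a leading "-" applies
    # the negation operation to the resulting value.
    negate = s.startswith('-')
    digits = s[1:] if negate else s
    value = 0
    for d in digits:
        value = value * base + int(d)
    return -value if negate else value

print(numeral_value("421"))  # 421
print(numeral_value("-3"))   # -3
```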
Lauri
Lauri Alanko wrote:
> So when given a negative numeral, the negation operation _is_ applied
> at some point when resolving the value of the expression. At most you
> can argue that this operation happens "during parsing", at a previous
> stage, like in programming languages literals are parsed during lexing.
> But it is not at all obvious to me that such concepts are directly
> applicable to the way humans read math...
But your earlier comment hinted that this does apply to the way humans
read math:
> Numeric literals have structure, even though we are so used to it that
> we often forget about it.
We are "so used to" reading numeric literals as denoting particular
numbers, that we don't treat them as denoting compound expressions.
This seems to me to capture the distinction that Brian was making.
However, there's a complication in the negation case. We can think of
numeric literals as being represented by a sublanguage which has
different rules from those of the formulae they're used in.
When dealing with positive numbers, it's easy to make a strong
distinction between the numeric literal sublanguage and the formula
language: e.g. "421" has a very different meaning from "xyz" even if
x=4, y=2, and z=1; or consider "42z".
However, the distinction becomes fuzzy in the negation case: "-3" could
be considered to be a literal representation of a negative number; or it
could be an expression at the level of the formula language,
representing the negation of the positive integer 3.
Since both ultimately denote the same number, without a precise and
standard definition of the language(s) in question, there's probably not
much we can meaningfully say about that distinction.
Joachim claimed that "there are no literals for negative numbers in
mathematics". If that claim could be supported, it would resolve the
matter. But without such a definition, either interpretation seems
reasonable.
I'll note that there's an argument from simplicity for Joachim's claim:
it means that all literal numbers are positive, and there are only two
meanings for "-", both at the level of the formula language. But from
the perspective of how people read and/or think about math, I don't
think Brian's point can really be denied (for many people, including
many mathematicians).
I wonder whether people like Quine, or Russell & Whitehead, had anything
to say about this... A virtual cookie goes to the first person who can
cite anything on the subject!
Anton
Anton van Straaten wrote:
> I'll note that there's an argument from simplicity for Joachim's claim:
> it means that all literal numbers are positive, and there are only two
> meanings for "-", both at the level of the formula language. But from
> the perspective of how people read and/or think about math, I don't
> think Brian's point can really be denied (for many people, including
> many mathematicians).
Note that in cases where negative literals are raised to powers --
happens all the time in series -- explicit parentheses are used, e.g.,
(-1)^n. So this case is explicitly avoided by disambiguation in the
real world.
--
Erik Max Francis && ···@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM, Y!M erikmaxfrancis
Hate come gratis / I connected every kind
-- Lamya
In article <·················@newssvr23.news.prodigy.net>,
Anton van Straaten <·····@appsolutions.com> wrote:
> Lauri Alanko wrote:
> > So when given a negative numeral, the negation operation _is_ applied
> > at some point when resolving the value of the expression. At most you
> > can argue that this operation happens "during parsing", at a previous
> > stage, like in programming languages literals are parsed during lexing.
> > But it is not at all obvious to me that such concepts are directly
> > applicable to the way humans read math...
>
> But your earlier comment hinted that this does apply to the way humans
> read math:
>
> > Numeric literals have structure, even though we are so used to it that
> > we often forget about it.
>
> We are "so used to" reading numeric literals as denoting particular
> numbers, that we don't treat them as denoting compound expressions.
> This seems to me to capture the distinction that Brian was making.
>
> However, there's a complication in the negation case. We can think of
> numeric literals as being represented by a sublanguage which has
> different rules from those of the formulae they're used in.
Mathematical notation is full of ambiguity. Mathematicians' journal articles are
intended to be read by other mathematicians, not stupid computers, so
they expect the readers to resolve the ambiguities through context. And
in cases where it's not so clear, they can fall back on the use of
explicit parentheses, just as is done in conventional programming
languages.
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
Barry Margolin wrote:
> In article <·················@newssvr23.news.prodigy.net>,
> Anton van Straaten <·····@appsolutions.com> wrote:
...
>>However, there's a complication in the negation case. We can think of
>>numeric literals as being represented by a sublanguage which has
>>different rules from those of the formulae they're used in.
>
>
> Mathematical notation is full of ambiguity. Their journal articles are
> intended to be read by other mathematicians, not stupid computers, so
> they expect the readers to resolve the ambiguities through context. And
> in cases where it's not so clear, they can fall back on the use of
> explicit parentheses, just as is done in conventional programming
> languages.
In this subthread, the discussion started from the claim "There are no
literals for negative numbers in mathematics". This only indirectly
relates to the question of ambiguity.
As for why we're discussing mathematics in the comp.lang groups, the
infix arithmetic syntax in most languages was explicitly designed to
imitate mathematical notation, to varying degrees, within the
constraints imposed by the text format used to represent most programs.
The attempt to imitate mathematical notation(s) is still driving the
design of some programming languages, as Neel pointed out with the
Fortress example.
Anton
In article <············@online.de>,
Joachim Durchholz <··@durchholz.org> wrote:
> Barry Margolin schrieb:
> > In article <············@online.de>,
> > Joachim Durchholz <··@durchholz.org> wrote:
> >
> >> Förster vom Silberwald schrieb:
> >>> There has been a discussion on heise.de about the following:
> >>>
> >>> What is the correct answer of the following: -1^2
> >> Mathematically, since ^ binds more tightly than */, which in turn binds
> >> more tightly than +-, it's -(1^2).
> >
> > But is this really a matter of binding?
>
> It is.
>
> > Is the - really an operator, or
> > is it part of the lexical syntax of a negative number?
>
> In mathematics, it is an operator. There are no literals for negative
Why are we talking about mathematics? This isn't sci.math, it's
comp.lang.XXX?
> numbers in mathematics, just as there are no literals for irrational
> numbers (you need a formula to display them).
pi and e aren't literals?
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe56ip$ndj$1@online.de>
Barry Margolin schrieb:
> In article <············@online.de>,
> Joachim Durchholz <··@durchholz.org> wrote:
>
>> Barry Margolin schrieb:
>>> In article <············@online.de>,
>>> Joachim Durchholz <··@durchholz.org> wrote:
>>>
>>>> Förster vom Silberwald schrieb:
>>>>> There has been a discussion on heise.de about the following:
>>>>>
>>>>> What is the correct answer of the following: -1^2
>>>> Mathematically, since ^ binds more tightly than */, which in turn binds
>>>> more tightly than +-, it's -(1^2).
>>> But is this really a matter of binding?
>> It is.
>>
>>> Is the - really an operator, or
>>> is it part of the lexical syntax of a negative number?
>> In mathematics, it is an operator. There are no literals for negative
>
> Why are we talking about mathematics? This isn't sci.math, it's
> comp.lang.XXX?
Mathematical conventions are a good starting point for establishing or
evaluating programming language conventions. After all, mathematical
conventions are known to far more people than programming language
conventions.
>> numbers in mathematics, just as there are no literals for irrational
>> numbers (you need a formula to display them).
>
> pi and e aren't literals?
I wouldn't consider them to be. These are just shorthand names for
infinite expansions. At least that's how I have been looking at them.
However, I start to see that a lot depends on the perspective you're
taking. One could distinguish literals from names by saying that
literals don't have a definition except implicitly inside an axiom - but
then only 0 is a literal, and 4711 isn't even a name, it's just a
shorthand notation for ((4 * 10 + 7) * 10 + 1) * 10 + 1, where 4, 10, 7,
1, * and + are names for other mathematical constructs, reducing
everything to a long series of 0 and the successor operation. That's not
what one would see as a "literal".
I'd say a "literal" is something that's taken for granted, not further
defined; I think that nicely captures the property that something can be
a literal or not depending on the level at which you're arguing. I.e. if
you're doing axiomatic foundations of natural numbers, 4711 is just a
shorthand, if you're doing matrix multiplication, 4711 is a literal.
(Similar, I'd say, for the question whether -4711 is a number or an
expression.)
Oh, and on top of all this, there is no such thing as a "literal" in
most mathematicians' minds, I'd say. In most areas, mathematicians
manipulate symbols and don't care much whether the symbol is a name, a
literal, a shorthand, or whatever - these things are substitutable for
each other anyway. (The only branch where I have seen any kind of
distinction made was formal logic. Though that was more devoted to
semantic issues, not the distinction between kinds of symbols.)
Regards,
Jo
Joachim Durchholz wrote:
>> I.e. is -1 the
>> negation operator followed by an integer literal, or is it just an
>> integer literal?
>
> The former in mathematics.
> The latter in many programming languages.
Whatever, I'd really write "dominant mathematician's notation(s)" instead of
"mathematics" above.
Cheers, BB
--
123 ? The least natura1 that's symbolizing all natura1s
just by itself. Successor : 1234.
[ comp.lang.lisp only; http://www.nhplace.com/kent/PFAQ/cross-posting.html ]
Boris Borcic <·······@gmail.com> writes:
> Joachim Durchholz wrote:
>
> > > I.e. is -1 the
> >> negation operator followed by an integer literal, or is it just an
> >> integer literal?
> > The former in mathematics.
> > The latter in many programming languages.
>
> Whatever, I'd really write "dominant mathematician's notation(s)"
> instead of "mathematics" above.
Perhaps you want Guy Steele's new Fortress language. He also
introduces "significant whitespace" as part of the grammar in order to
navigate that particular tangle. :) Consequently, -1 and - 1 are
probably different in meaning.
Barry Margolin wrote:
>
> But is this really a matter of binding? Is the - really an operator, or
> is it part of the lexical syntax of a negative number? I.e. is -1 the
> negation operator followed by an integer literal, or is it just an
> integer literal?
Aha, now we just have to identify a language s.t.
-1^2 = (-1)^2 = 1
but
-(1)^2 = -((1)^2) = -1
Cheers, BB
"Barry Margolin" <······@alum.mit.edu> wrote
> But is this really a matter of binding? Is the - really an operator, or
> is it part of the lexical syntax of a negative number? I.e. is -1 the
> negation operator followed by an integer literal, or is it just an
> integer literal?
Let (G, *) be a group.
There is an element e in G such that for each a in G, a * e = e * a = a.
For each a in G, there exists an element b in G such that a * b = b * a =
e.
The set of integers Z under addition + is a group.
1 + 0 = 0 + 1 = 1
1 + (-1) = (-1) + 1 = 0.
For all x in Z, there is an element y in Z such that x + y = y + x = 0. y is
the inverse of x.
-1 is the inverse of 1. -1 is an element of Z.
I see -1 as an integer literal.
I see -x as another element of Z, some integer which is the inverse of the
integer x.
On 9 Okt., 10:20, "Marlene Miller" <·············@worldnet.att.net>
wrote:
> "Barry Margolin" <······@alum.mit.edu> wrote
>
> > But is this really a matter of binding? Is the - really an operator, or
> > is it part of the lexical syntax of a negative number? I.e. is -1 the
> > negation operator followed by an integer literal, or is it just an
> > integer literal?
>
[ ... ]
> I see -1 as an integer literal.
> I see -x as another element of Z, some integer which is the inverse of the
> integer x.
From the point of view of a compiler writer, matters may be different.
First, it's unusual to have arbitrary whitespace in literals. For,
surely, once we have -1 as a literal, you'll also want
- // some comment here
1
to be recognized as such. Or perhaps even
-
#include "myconstant.h"
where myconstant.h contains
1
Second, we normally want a language to be orthogonal and systematic.
Thus, when -1 is a literal, then certainly -x should be one, too.
Needless to say, this would make the scanner more complex than it
need be. For we still need unary minus for cases like -(a+b), so we
have to handle that one differently. So we end up handling '-'
differently lexically depending on syntactic context .... something
seems wrong with that, doesn't it?
The right way to do it is, IMHO, to handle '-' just like any other
unary operator and let the type checker decide whether -1, -x or -1^2
make sense.
Ingo Menger wrote:
> Second, we normally want a language to be orthogonal and systematic.
> Thus, when -1 is a literal, then certainly -x should be one, too.
By that reasoning, the reader (or parser, in the case of C, ...) would
have to parse 12345 as a symbol just like abcde. And probably
punctuation as well, making it rather pointless.
On 9 Okt., 19:42, Matthias Buelow <····@incubus.de> wrote:
> Ingo Menger wrote:
> > Second, we normally want a language to be orthogonal and systematic.
> > Thus, when -1 is a literal, then certainly -x should be one, too.
>
> By that reasoning, the reader (or parser, in the case of C, ...) would
> have to parse 12345 as a symbol just like abcde. And probably
> punctuation as well, making it rather pointless.
You misunderstood me, it seems.
My point was: given that oftentimes code with constants like 1234 is
refactored into code with variables, why then should
-1234^2
have a different syntactic structure than
-myconst^2
And this would be the case when - was a part of the literal.
On 3 Ott, 20:13, Förster vom Silberwald <··········@hotmail.com>
wrote:
> What is the correct answer of the following: -1^2
All ambiguity could be avoided by forcing whitespace to separate
things (as in Lisp):
bad: -1^2 = ??
good(1): -1 ^ 2 = 1
good(2): - 1 ^ 2 = -1
-JO
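That whitespace rule is easy to prototype. A sketch in Python (my own toy, not from any poster): split on whitespace, so "-1" with no space is a single literal token, while a lone "-" is the negation operator, and ^ is right-associative exponentiation.

```python
def evaluate(src):
    toks = src.split()                   # whitespace alone decides tokenhood
    if toks[0] == '-':                   # a lone '-' negates everything after it
        return -evaluate(' '.join(toks[1:]))
    if len(toks) == 1:
        return int(toks[0])              # "-1" with no space is one literal
    base, op, *rest = toks
    assert op == '^'                     # only ^ supported in this toy
    return int(base) ** evaluate(' '.join(rest))

print(evaluate('-1 ^ 2'))    # 1  : (-1)^2
print(evaluate('- 1 ^ 2'))   # -1 : -(1^2)
```

The two spellings give the two readings without any precedence table at all; the cost, as elsewhere in the thread, is that whitespace becomes significant.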
Förster vom Silberwald wrote:
> What is the correct answer of the following: -1^2
>
> I had thought it is +1. My Bigloo Scheme gives: (expt -1 2) --> 1
>
> However, the R language tailored to statistical calculations gives the
> following: print(-1^2) --> -1
>
> Some have argued on heise.de that -1 is mathematically speaking the
> right answer.
>
> Thanks, for any some more insights into a not so easy problem I guess.
Surely the answer depends on the rules of a language?
Programming languages != mathematical notation.
As an example, in C you will get -3.
On 4 Okt., 20:51, Bakul Shah <······@bitblocks.com> wrote:
> Förster vom Silberwald wrote:
> > What is the correct answer of the following: -1^2
>
> > I had thought it is +1. My Bigloo Scheme gives: (expt -1 2) --> 1
>
> > However, the R language tailored to statistical calculations gives the
> > following: print(-1^2) --> -1
>
> > Some have argued on heise.de that -1 is mathematically speaking the
> > right answer.
>
> > Thanks, for any some more insights into a not so easy problem I guess.
>
> Surely the answer depends on the rules of a language?
> Programming languages != mathematical notation.
>
> As an example, in C you will get -3.
And in perl:
$ perl -e "print (-1^2);"
4294967293
As Barry Margolin correctly pointed out, there is no such thing as an
unambiguous mathematical notation. Mathematicians can resolve
ambiguities. Compilers cannot, or at least should not.
Thus, the right answer of -1^2 could be for instance to "shut down
nuclear reactor 2" (2) if (^) reactor 1 is already down (-1).
Ingo Menger wrote:
> $ perl -e "print (-1^2);"
> 4294967293
I think we have a winner.
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe6dgp$7hi$1@online.de>
Matthias Buelow schrieb:
> Ingo Menger wrote:
>
>> $ perl -e "print (-1^2);"
>> 4294967293
>
> I think we have a winner.
Not so fast.
Quoting from http://perldoc.perl.org/perlop.html :
> Binary "**" is the exponentiation operator. It binds even more tightly
> than unary minus, so -2**4 is -(2**4), not (-2)**4.
> [...]
> Binary "^" returns its operands XORed together bit by bit.
$ perl -e "print (-1^2);"
-1
Regards,
Jo
Joachim Durchholz <··@durchholz.org> writes:
> Matthias Buelow schrieb:
> > Ingo Menger wrote:
> >
> >> $ perl -e "print (-1^2);"
> >> 4294967293
> > I think we have a winner.
>
> Not so fast.
>
> Quoting from http://perldoc.perl.org/perlop.html :
> > Binary "**" is the exponentiation operator. It binds even more tightly
> > than unary minus, so -2**4 is -(2**4), not (-2)**4.
> > [...]
> > Binary "^" returns its operands XORed together bit by bit.
>
> $ perl -e "print (-1^2);"
> -1
Well, XORing shouldn't return -1. The bits from the "2" will have some
effect.
When I tried that code on Linux and MacOS X, I get the same results as
Ingo Menger.
--
Thomas A. Russ, USC/Information Sciences Institute
On 8 Okt., 22:17, ····@sevak.isi.edu (Thomas A. Russ) wrote:
> Joachim Durchholz <····@durchholz.org> writes:
> > Matthias Buelow schrieb:
> > > Ingo Menger wrote:
>
> > >> $ perl -e "print (-1^2);"
> > >> 4294967293
> > > I think we have a winner.
>
> > Not so fast.
>
> > Quoting from http://perldoc.perl.org/perlop.html :
> > > Binary "**" is the exponentiation operator. It binds even more tightly
> > > than unary minus, so -2**4 is -(2**4), not (-2)**4.
> > > [...]
> > > Binary "^" returns its operands XORed together bit by bit.
>
> > $ perl -e "print (-1^2);"
> > -1
>
> Well, XORing shouldn't return -1. The bits from the "2" will have some
> effect.
Perhaps Joachim has a custom version of perl? :)
From: Cesar Rabak
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe6rho$63p$1@aioe.org>
On Oct 5, 7:26 pm, Cesar Rabak <·······@yahoo.com.br>
> > And in perl:
>
> > $ perl -e "print (-1^2);"
> > 4294967293
>
> Nope:
>
> $ perl -e "print (-1**2)"
> -1
You can't be serious.
-t
(I mean, that's the rule for the thread.)
On 6 Ott, 05:55, Tom Lord <····@emf.net> wrote:
> On Oct 5, 7:26 pm, Cesar Rabak <·······@yahoo.com.br>
>
> > > And in perl:
>
> > > $ perl -e "print (-1^2);"
> > > 4294967293
>
> > Nope:
>
> > $ perl -e "print (-1**2)"
> > -1
>
> You can't be serious.
>
> -t
>
> (I mean, that's the rule for the thread.)
And what about
perl -e "print (-1)**2;"
-1
Is that normal (I don't know much of perl)?
On Sat, 06 Oct 2007 15:36:03 -0700, andrea <········@gmail.com> wrote:
> On 6 Ott, 05:55, Tom Lord <····@emf.net> wrote:
[...]
>>>> (I mean, that's the rule for the thread.)
>
> And what about
> perl -e "print (-1)**2;"
> -1
>
> Is that normal (I don't know much of perl)?
Yes, print (-1)**2; gets parsed as (print (-1))**2;
If you have warnings enabled perl warns you about this.
perl -e 'print ((-1)**2)'
1
From: Cesar Rabak
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fe9h8k$3ir$1@aioe.org>
David Formosa (aka ? the Platypus) wrote:
> On Sat, 06 Oct 2007 15:36:03 -0700, andrea <········@gmail.com> wrote:
>> On 6 Ott, 05:55, Tom Lord <····@emf.net> wrote:
>
> [...]
>
>>>>> (I mean, that's the rule for the thread.)
>> And what about
>> perl -e "print (-1)**2;"
>> -1
>>
>> Is that normal (I don't know much of perl)?
>
> Yes, print (-1)**2; gets parsed as (print (-1))**2;
> If you have warnings enabled perl warns you about this.
>
> perl -e 'print ((-1)**2)'
> 1
Yep! Who said parentheses are important in Lisp ;-)
--
Cesar Rabak
P.S. Following Tom Lord's advice about being serious!
On Fri, 05 Oct 2007 02:46:36 -0700, Ingo Menger wrote:
> And in perl:
>
> $ perl -e "print (-1^2);"
> 4294967293
That's funny, I get: 18446744073709551613
David Trudgett
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <feap5u$85h$1@online.de>
David Trudgett schrieb:
> On Fri, 05 Oct 2007 02:46:36 -0700, Ingo Menger wrote:
>
>> And in perl:
>>
>> $ perl -e "print (-1^2);"
>> 4294967293
>
> That's funny, I get: 18446744073709551613
You probably have a 64-bit machine.
^ is the bitwise XOR operator, so on a two's-complement CPU,
-1^2 is "all bits set except the second-least significant one".
(I'm a bit puzzled that this gives positive answers. Since the most
significant bit is set, the result should be negative - unless Perl
interprets the results of bitwise manipulation as an unsigned integer.)
Regards,
Jo
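The arithmetic above is easy to check. A sketch in Python (my choice of language, not from the thread), modelling Perl's fixed-width unsigned result at both word sizes:

```python
# -1 in two's complement is all bits set; XOR with 2 clears bit 1
# (the second-least significant bit).
def unsigned_xor(bits):
    mask = (1 << bits) - 1
    return ((-1) & mask) ^ 2   # interpreted as an unsigned integer, Perl-style

print(unsigned_xor(32))   # 4294967293, Ingo's 32-bit result
print(unsigned_xor(64))   # 18446744073709551613, David's 64-bit result

# Python's own integers are unbounded, so the signed result stays negative:
assert -1 ^ 2 == -3
```

This confirms the guess: both large positive numbers are exactly the signed value -3 reinterpreted as an unsigned word.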
From: David Trudgett
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <usl4mwgm3.fsf@yahoo.com>
Joachim Durchholz <··@durchholz.org> writes:
> David Trudgett schrieb:
>> On Fri, 05 Oct 2007 02:46:36 -0700, Ingo Menger wrote:
>>
>>> And in perl:
>>>
>>> $ perl -e "print (-1^2);"
>>> 4294967293
>>
>> That's funny, I get: 18446744073709551613
>
> You probably have a 64-bit machine.
Yeah, sorry, I should have put a winky face or something after that.
Cheers,
David
--
These are not the droids you are looking for. Move along.
Förster vom Silberwald wrote:
> What is the correct answer of the following: -1^2
There's no question that to a mathematician this means -(1^2). The problem
is that it's hard to write down a language grammar which parses this
correctly while also parsing x^-1 as x^(-1) (instead of a syntax error). If
you give unary negation low precedence then you can't parse x^-1, if you
give it high precedence then you misparse -1^2, and if you give it both
precedences then you get a shift/reduce conflict.
Languages like C solve the problem by not having an infix exponentiation
operator. They still parse -x*y and -x/y as (-x)*y and (-x)/y instead of
-(x*y) and -(x/y), which may be why the C standard mandates that integer
division truncate toward zero. There are still cases where this difference
has an effect -- e.g. -INT_MIN/2 != -(INT_MIN/2) on a two's-complement
machine. Haskell solves the problem by refusing to parse x^-y. The Right Way
to do it is to give unary minus low precedence but also allow it immediately
after any operator of higher precedence, e.g.:
add ::= neg_mul | add '+' neg_mul | add '-' neg_mul
neg_mul ::= mul | '-' neg_mul
mul ::= expt | mul '*' neg_expt | mul '/' neg_expt
neg_expt ::= expt | '-' neg_expt
expt ::= unary | unary '^' neg_expt
unary ::= '!' neg_unary | '~' neg_unary | atom
neg_unary ::= unary | '-' neg_unary
But this doubles the number of productions and the details are rather subtle
(I'm not sure I got it right). A lot of language designers take the easy way
out, or don't even notice the problem, which is why there are scads of
languages in the wild that misparse -1^2.
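The grammar above can be exercised with a throwaway recursive-descent parser, sketched here in Python (the function names mirror the nonterminals; everything else, including the single-character tokenizer, is mine). It returns a fully parenthesized rendering of each parse:

```python
def parse(src):
    toks = [c for c in src if c != ' ']   # one-character tokens suffice here
    pos = 0
    def peek():
        return toks[pos] if pos < len(toks) else None
    def take():
        nonlocal pos
        pos += 1
        return toks[pos - 1]
    def add():                  # add ::= neg_mul | add ('+'|'-') neg_mul
        node = neg_mul()
        while peek() in ('+', '-'):
            node = '(%s%s%s)' % (node, take(), neg_mul())
        return node
    def neg_mul():              # neg_mul ::= mul | '-' neg_mul
        if peek() == '-':
            take()
            return '(-%s)' % neg_mul()
        return mul()
    def mul():                  # mul ::= expt | mul ('*'|'/') neg_expt
        node = expt()
        while peek() in ('*', '/'):
            node = '(%s%s%s)' % (node, take(), neg_expt())
        return node
    def neg_expt():             # neg_expt ::= expt | '-' neg_expt
        if peek() == '-':
            take()
            return '(-%s)' % neg_expt()
        return expt()
    def expt():                 # expt ::= unary | unary '^' neg_expt
        node = unary()
        if peek() == '^':
            take()
            node = '(%s^%s)' % (node, neg_expt())
        return node
    def unary():                # unary ::= ('!'|'~') neg_unary | atom
        if peek() in ('!', '~'):
            return '(%s%s)' % (take(), neg_unary())
        return take()           # atom: a digit or a variable name
    def neg_unary():            # neg_unary ::= unary | '-' neg_unary
        if peek() == '-':
            take()
            return '(-%s)' % neg_unary()
        return unary()
    return add()

print(parse('-1^2'))   # (-(1^2)) : the mathematician's reading
print(parse('x^-1'))   # (x^(-1)) : not a syntax error either
```

Low-precedence unary minus and x^-1 coexist, as claimed, at the cost of those doubled productions.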
This isn't the only weird thing about operators in programming languages. In
typewritten mathematics 1/xy generally means 1/(xy), but in practically all
programming languages with infix operators, 1/x*y means (1/x)*y. And x/y/z
is often meaningful in source code, but meaningless as mathematical
notation. Languages where juxtaposition is multiplication are even worse; in
Mathematica, "1/2x" actually means "x/2". I think it should be a syntax
error. Mathematica does seem to get negation right, though.
-- Ben
Ben Rudiak-Gould wrote:
> There's no question that to a mathematician this means -(1^2). The
> problem is that it's hard to write down a language grammar which parses
> this correctly while also parsing x^-1 as x^(-1) (instead of a syntax
> error). If you give unary negation low precedence then you can't parse
> x^-1, if you give it high precedence then you misparse -1^2, and if you
> give it both precedences then you get a shift/reduce conflict.
In mathematical notation, you'd usually write x^-1 with the -1
superscripted, which is unambiguous. The problem arises when trying to
express that while omitting the explicit grouping introduced by the
superscripting.
> Haskell solves the problem by refusing to parse x^-y.
This makes some sense if you're emulating mathematical notation. After
all, if you wanted to write x^(y+z) you wouldn't usually expect to be
able to write it as x^y+z. So a simple and justifiable rule here is to
require that the term after the ^ operator either be a syntactically
simple one, or else be parenthesized.
> The Right Way to do it is to give unary minus low precedence
> but also allow it immediately after any operator of higher precedence
I don't think there's a real Right Way here other than reintroducing a
mechanism for explicitly signaling the grouping. Maybe parentheses? ;)
Anton
In article <<···················@nlpi068.nbdc.sbc.com>>,
Anton van Straaten <·····@appsolutions.com> wrote:
>
>> The Right Way to do it is to give unary minus low precedence
>> but also allow it immediately after any operator of higher precedence
>
> I don't think there's a real Right Way here other than reintroducing a
> mechanism for explicitly signaling the grouping. Maybe parentheses? ;)
Nah. Use whitespace, like Fortress. Then you can distinguish between
x/y * z
and
x / y*z
That'll *never* cause problems, right? :)
(To be fair, I like Fortress. However, it did have some, er, unique
design decisions.)
--
Neel R. Krishnaswami
·····@cs.cmu.edu
Anton van Straaten wrote:
> After
> all, if you wanted to write x^(y+z) you wouldn't usually expect to be
> able to write it as x^y+z. So a simple and justifiable rule here is to
> require that the term after the ^ operator either be a syntactically
> simple one, or else be parenthesized.
The main argument in favor of allowing x^-y is that it has only one possible
parse, no matter what the parser (human or machine) believes the language's
precedence rules are, so allowing it is quite harmless.
... Although you do have to know that ^ is infix, and not, say, a Pascal
pointer dereference. And I was wrong when I said that Haskell will fail to
parse x^-1; it'll parse it just fine, but probably fail at the binding phase
unless you've defined (^-).
-- Ben
Ben Rudiak-Gould <·············@cam.ac.uk> [Hi, Ben!] writes:
> and if you give it both
>precedences then you get a shift/reduce conflict.
Are you arguing that mathematicians have an LALR parser built in? :-)
I was taught, in C, to ignore the table of 23 levels of precedence, because
you'll always get it wrong and have hard-to-find bugs, and just always fully
parenthesize everything. Rather ironic, isn't it?
On Oct 5, 12:21 am, ····@cs.berkeley.edu (Brian Harvey) wrote:
> I was taught, in C, to ignore the table of 23 levels of precedence, because
> you'll always get it wrong and have hard-to-find bugs, and just always fully
> parenthesize everything. Rather ironic, isn't it?
Oddly enough then that I only rarely commit operator-precedence errors
in C even though I have a thing about using minimal parens :) But I'll
freely admit that I never bothered to memorize the rules either - K&R
actually got the table mostly right. The main place where I find that
the 'intuitive' approach breaks down is with all the pointer-
dereferencing operations and there I *do* fully parenthesize my code.
In fact, given the richness of C's pointer manipulation primitives, I'm
not entirely sure that there *is* any intuitive precedence for them,
but with the more vanilla mathematical & logical operators, everything
seems to work just fine.
Of course, the built-in operators are such a small fraction of my code
that it pretty much degenerates to a fully parenthesized prefix
notation anyway :)
david rush
David Rush wrote:
> The main place where I find that
> the 'intuitive' approach breaks down is with all the pointer-
> dereferencing operations and there I *do* fully parenthesize my code.
But those are all unary. For unary operators it's simple: postfix operators
bind tightest and prefix operators bind next-tightest. Within each level
there can never be any ambiguity. Array access [expr] and struct access
.ident and ->ident and function call (expr,expr) are all postfix operators.
Pointer dereference * is prefix.
In fact, this is another seriously screwed-up thing about C's design: the
fact that *, alone among the access operators, is prefix. If it were postfix
like everything else, the type syntax would be much easier to understand: a
pointer to an array of ten functions taking two ints and returning pointers
to arrays of five characters would be declared as
char a*[10](int,int)*[5];
and used as
int i = a*[w](x,y)*[z];
both of which can simply be read left to right as in Pascal, except for the
final result type. Instead we get
int (*(*a)[10](int,int))[5];
and
int i = (*(*a)[w](x,y))[z];
You also wouldn't need the silly -> operator, since it could be replaced by
*. as in Pascal. Of course postfix * is ambiguous, but * never made much
sense for this operation anyway. You could, in a moment of sanity, use
function notation for the bitwise operators, which frees up ^. Or you could
use ->:
char a->[10](int,int)->[5];
That looks pretty good, actually. So why didn't they do that? Seriously,
does anyone know? I hope it wasn't just so that you could write *++p without
parentheses.
> but with the more vanilla mathematical & logical operators, everything
> seems to work just fine.
Consider yourself lucky. Be glad you're not this poor guy:
http://groups.google.com/group/comp.theory/msg/517b7316318d10fd?dmode=source
Can you spot the bug? Or rather, one of the two bugs, the other being that
he thinks that 2^30 == 1073741824. Which is true. But more to the point,
(2^30) == 1073741824 isn't.
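The trap deserves spelling out: in C, == binds more tightly than ^, so 2^30 == 1073741824 parses as 2 ^ (30 == 1073741824). A sketch in Python, with the C grouping written out explicitly (Python itself made the opposite choice, binding ^ tighter than ==):

```python
# The C reading: the comparison evaluates first, to 0, leaving 2 ^ 0.
c_reading = 2 ^ (30 == 1073741824)   # bool is an int in Python, so False == 0
assert c_reading == 2                # nonzero, i.e. "true" to a C if()

# The parenthesized version is simply false: ^ is XOR, and 2 ^ 30 == 28.
assert ((2 ^ 30) == 1073741824) is False

# What the poster actually meant requires real exponentiation:
assert 2 ** 30 == 1073741824
```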
By the way, this is an interesting bit of history for those who haven't seen it:
http://www.lysator.liu.se/c/dmr-on-or.html
Also interesting, to me, is that dmr writes in a way that suggests that he
doesn't realize that operators don't need to have a precedence level.
-- Ben
Brian Harvey wrote:
Hi, Brian! Long time no see.
> I was taught, in C, to ignore the table of 23 levels of precedence, because
> you'll always get it wrong and have hard-to-find bugs, and just always fully
> parenthesize everything. Rather ironic, isn't it?
What I find frustrating is that this problem is so easy to fix. The problem
is not that C's precedence levels are wrong, but that the idea of precedence
levels is wrong. The precedence rules in mathematics and in programmers'
heads form a partial order, if even that. So why does every programming
language I've ever seen define a total order on (equivalence classes of) its
binary operators? Here are the precedence rules for all the binary operators
in C:
       ,   ||  &&  |   ^   &   ==  <   <<  +   *
,      L   <   <   <   <   <   <   <   <   <   <
||     >   L   <   <   <   <   <   <   <   <   <
&&     >   >   L   <   <   <   <   <   <   <   <
|      >   >   >   L   <   <   <   <   <   <   <
^      >   >   >   >   L   <   <   <   <   <   <
&      >   >   >   >   >   L   <   <   <   <   <
==     >   >   >   >   >   >   L   <   <   <   <
<      >   >   >   >   >   >   >   L   <   <   <
<<     >   >   >   >   >   >   >   >   L   <   <
+      >   >   >   >   >   >   >   >   >   L   <
*      >   >   >   >   >   >   >   >   >   >   L
Here's roughly the subset of those rules that I use when reading code:
       ,   =   ||  &&  |   ^   &   ==  <   <<  +   *
,      A   <   <   <   <   <   <   <   <   <   <   <
=      >   R   <   <   <   <   <   <   <   <   <   <
||     >   >   A   <   .   .   .   <   <   .   .   .
&&     >   >   >   A   .   .   .   <   <   .   .   .
|      >   >   .   .   A   .   .   .   .   .   .   .
^      >   >   .   .   .   A   .   .   .   .   .   .
&      >   >   .   .   .   .   A   .   .   .   .   .
==     >   >   >   >   .   .   .   .   .   .   <   <
<      >   >   >   >   .   .   .   .   .   .   <   <
<<     >   >   .   .   .   .   .   .   .   L   .   .
+      >   >   .   .   .   .   .   >   >   .   L   <
*      >   >   .   .   .   .   .   >   >   .   >   L
If I encounter anything outside that subset in C code, I may have trouble
understanding it and I'll definitely suspect that it has a bug. This rarely
happens because most C programmers either haven't learned the other rules or
know better than to confuse their readers with them. Dropping those "."
rules is a safe change to the language. It can only turn confusing code into
erroneous code, and the workaround is to add the parentheses that should
have been there in the first place. So I think this 30-year-old problem
could still be fixed, though with the way the standards process goes it
would take another 20 years. All you have to do is deprecate and then remove
the precedence rules that no one in their right mind would ever actually
use. If there's a downside to this, I'm not seeing it.
Okay, so maybe people knew less about language design back then. But what
about Java and C#? Both wanted a syntax familiar to C programmers, but it
would have been so easy to keep only the rules that people use and drop the
crazy bug-prone ones. Why didn't they? I seriously think that the idea never
occurred to either design team. I think they considered their options to be
keeping C's order (bad) or using a different order (worse). I don't know
where the idea of precedence levels came from -- certainly not mathematics
-- but it seems to be firmly engrained in language designers' heads. I can't
recall hearing anyone even mention this issue before, except for me in a
couple of previous screeds.
And what about Haskell? Apparently everyone is crazy except me.
-- Ben
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fei2p6$qrv$1@online.de>
Ben Rudiak-Gould schrieb:
> [...] keep only the rules that people use
> and drop the crazy bug-prone ones. Why didn't they? I seriously think
> that the idea never occurred to either design team. I think they
> considered their options to be keeping C's order (bad) or using a
> different order (worse).
I'm not sure that a different order is really worse.
You'd want to use different operator symbols to keep people from
instinctively applying the wrong precedence order, of course.
> I don't know where the idea of precedence
> levels came from -- certainly not mathematics --
I think precedence levels are just a formalization of how formulae have
been written in mathematics.
Take a look at formal logic.
You get to see stuff like
a = b & b = c => a = c
Well, they indicate precedence levels by using a larger "implies" arrow.
A rough ASCII art approximation would be
              \
a = b & b = c ===> a = c
              /
which also makes the formula a whole lot clearer.
> but it seems to be
> firmly engrained in language designers' heads.
It can save a lot of parentheses.
Actually a sane and complete precedence hierarchy isn't that difficult.
Group the operators like this (lowest precedence to highest):
Assignment
Boolean
Comparison
Integer and bitwise arithmetic
Access (pointer dereference, field access etc.)
Apply the usual rules inside the groups ('and' binds tighter than 'or',
'*' tighter than '+' etc.).
You don't need a precedence for operators that are prefix-only or
postfix-only.
Oh, and you can't handle negative numbers as literals. (Which was, I
think, a desirable thing to do on the machines that C was designed on
and for.)
> I can't recall hearing
> anyone even mention this issue before, except for me in a couple of
> previous screeds.
Declaring non-precedence isn't that uncommon; I have seen that
mentioned in many texts on operator-precedence parsing.
One might also want to make operators like / and - nonassociative to
force programmers to declare what they mean when writing stuff like
a / b / c
(Nonassociativity is just an error entry in the precedence table on the
main diagonal, nonprecedence is an error entry elsewhere in the
precedence table.)
The main problem with partial precedence is that you need to store the
precedence table. Take a few dozen of operators, and you get a table
that's several hundred entries, all of which must be considered and
potentially debugged. (At the time these techniques were explored,
several hundred bytes of memory were something that one didn't want to
waste.)
Total precedence is easier to handle and easier to debug. I guess that's
why it was explored more thoroughly.
Oh, and if you allow programmers to define their own precedences, you
start to worry about precedence table size. And handling partial
precedence gets really icky (you need graph algorithms, which are far
less well-known than the sort algorithms that you need for a total order).
Of course, none of these problems are really difficult to solve, but it
takes time and effort which is often spent on more interesting issues.
> And what about Haskell?
User-definable precedence levels, I guess.
> Apparently everyone is crazy except me.
Count me in, too ;-)
Regards,
Jo
On 10 Oct, 09:33, Joachim Durchholz <····@durchholz.org> wrote:
> Take a look at formal logic.
> You get to see stuff like
>
> a = b & b = c => a = c
>
> Well, they indicate precedence levels by using a larger "implies" arrow.
> A rough ASCII art approximation would be
>
>               \
> a = b & b = c ===> a = c
>               /
>
> which also makes the formula a whole lot clearer.
Often, and I'm not saying this is necessarily the case in the examples
you're thinking of, this is a matter of implicit typing.
Typically, there are two different types of arrow. One which allows you
to assert what is implied as a statement of the formal language, and
the second allows you to reason about properties of your language.
If we use -> for the second case, that's your big arrow, and => for
the first then:
a=b & b=c -> a=c has only one possible parse, as
a=b & (b=c -> a=c) wouldn't make sense; it is trying to join a
statement of your formal language with one made about your formal
language.
So (a=b & b=c) -> a=c is the only possible parse.
There is also something similar going on at the lower levels, where
you can't take the & of an atom, but only of a predicate response.
On the other hand,
a=b & b=c => a=c has two reasonable parses, as a=b & (b=c => a=c) and
(a=b & b=c) => a=c are both statements in your formal language, and it's
more common to see brackets in these situations.
Joachim Durchholz wrote:
> I think precedence levels are just a formalization of how formulae have
> been written in mathematics.
>
> Take a look at formal logic.
> You get to see stuff like
>
> a = b & b = c => a = c
I'm in favor of operator precedence rules. I do think they make formulas
clearer when they're well chosen. (I'm not sure I'm in favor of infix
operators in the first place, but if you're going to have them, there should
be some precedence rules.)
> It can save a lot of parentheses.
I only want to drop the rules that people generally don't use because
they're confusing or nonsensical. The cases where you'd have to add
parentheses are cases where most programmers would expect you to add them
anyway.
You must agree that C has /some/ rules that should be dropped. For example,
the standard requires that (a < b < c) parse as ((a < b) < c). The only
people who are ever going to trigger this rule are bright-eyed bushy-tailed
beginners who are foolish enough to expect (a < b < c) to have its
mathematical meaning. Those people are going to get bitten badly. Beginners
are hurt by this rule, no one else benefits from it, and it's absurdly easy
to fix. All you have to do is change
relational-expression:
shift-expression
relational-expression < shift-expression
relational-expression > shift-expression
relational-expression <= shift-expression
relational-expression >= shift-expression
to
relational-expression:
shift-expression
shift-expression < shift-expression
shift-expression > shift-expression
shift-expression <= shift-expression
shift-expression >= shift-expression
in section 6.5.8 of WG14 N1124. It even shortens the standard (by 20
characters). I don't understand why this change never happened. I don't
understand why K&R chose this behavior in the first place. The gods must be
crazy.
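The two readings side by side, sketched in Python, one of the few C-family descendants that treats a < b < c as a chained comparison:

```python
a, b, c = 5, 3, 4

# The C parse, written out: (a < b) yields 0, and 0 < 4 is "true".
assert ((a < b) < c) == True     # the surprising C result

# Python chains instead: a < b and b < c, the mathematical meaning.
assert (a < b < c) == False
```

The proposed grammar change would make the unparenthesized form a syntax error in C, so the bright-eyed beginner gets a diagnostic instead of a silently wrong answer.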
> Actually a sane and complete precedence hierarchy isn't that difficult.
> Group the operators like this (lowest precedence to highest):
>
> Assignment
> Boolean
> Comparison
> Integer and bitwise arithmetic
> Access (pointer dereference, field access etc.)
>
> Apply the usual rules inside the groups ('and' binds tighter than 'or',
> '*' tighter than '+' etc.).
Yes, I agree, as long as the usual rules are those familiar from
mathematics. That's roughly what my table was intended to be, except that
where C's precedence is backwards from what you'd expect I made it an error
rather than silently change the meaning of old code.
> Declaring non-precedence isn't that's uncommon; I have seen that
> mentioned in many texts on operator-precedence parsing.
If you're talking about non-associativity at a given precedence level, I've
seen that. I've never seen a programming language that didn't have a
traditional operator-precedence table with highest and lowest levels and
left- or right- or non-associativity at each level.
> One might also want to make operators like / and - nonassociative to
> force programmers to declare what they mean when writing stuff like
>
> a / b / c
Yes, (a / b / c) should be an error and (a / b * c) probably should mean (a
/ (b * c)), but I think it's too late to fix that. I'm trying to concentrate
on things that would be easy to fix but remain unaccountably unfixed.
It is interesting that the rules taught in grade-school arithmetic (PEMDAS)
are so much at odds with the rules used in real (typewritten) mathematics.
> The main problem with partial precedence is that you need to store the
> precedence table. Take a few dozen of operators, and you get a table
> that's several hundred entries, all of which must be considered and
> potentially debugged.
I can't see this being a problem for languages with a fixed set of
operators. One easy approach is to arrange the operators in a plane such that
a < b <=> a_x < b_x && a_y < b_y
I don't know if you can represent every partial order this way, but it would
work here and it's simple and efficient to implement.
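The embedding is small enough to sketch directly in Python (the coordinates and the operator sample are mine, purely illustrative): operator a has lower precedence than b exactly when both of a's coordinates are smaller, and a pair that compares neither way is incomparable, i.e. would require explicit parentheses.

```python
# Two comparable chains (|| < && and + < *) plus a bitwise chain placed
# so that it is incomparable with the arithmetic operators.
coords = {
    '||': (0, 0), '&&': (1, 1),   # boolean chain
    '+':  (2, 3), '*':  (3, 4),   # arithmetic chain
    '|':  (3, 1), '&':  (4, 2),   # bitwise chain
}

def binds_tighter(a, b):
    ax, ay = coords[a]
    bx, by = coords[b]
    return bx > ax and by > ay    # b beats a in both coordinates

def incomparable(a, b):
    return not binds_tighter(a, b) and not binds_tighter(b, a)

assert binds_tighter('+', '*')    # the usual arithmetic rule survives
assert binds_tighter('||', '&&')  # so does the boolean one
assert incomparable('+', '|')     # mixing chains forces parentheses
```

Lookup is two integer comparisons, with no precedence table to store, which answers the space objection for a fixed operator set.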
> And handling partial
> precedence gets really icky (you need graph algorithms, which are far
> less well-known than the sort algorithms that you need for a total order).
For languages that let you define your own rules, all you need is DFS or
BFS. There's no need to compute the transitive closure beforehand because
most programs are not going to rely on very many precedence rules. If it is
necessary to speed up the search, you can just cache previously found paths
(or add edges to the graph).
-- Ben
From: Joachim Durchholz
Subject: Re: The right answer of -1^2 is?
Date:
Message-ID: <fem7fs$om5$1@online.de>
(Follow-up set to comp.lang.functional, because I don't think this has
much relationship to Lisp left now that we're discussing operator
precedence.)
Ben Rudiak-Gould wrote:
>
> I only want to drop the rules that people generally don't use because
> they're confusing or nonsensical.
I'm fully in line with that.
A well-designed hierarchy should have no nonsensical precedences, and I
believe it is possible to have a total precedence ordering without
sowing confusion (except among C programmers).
> You must agree that C has /some/ rules that should be dropped.
Definitely.
> For example, the standard requires that (a < b < c) parse as ((a < b)
> < c). [...] Beginners are hurt by this rule, no one else benefits
> from it, and it's absurdly easy to fix. [...] I don't understand why
> this change never happened. I don't understand why K&R chose this
> behavior in the first place. The gods must be crazy.
Fully agreed.
>> Actually a sane and complete precedence hierarchy isn't that
>> difficult. Group the operators like this (lowest precedence to highest):
>>
>> Assignment
>> Boolean
>> Comparison
>> Integer and bitwise arithmetic
>> Access (pointer dereference, field access etc.)
>>
>> Apply the usual rules inside the groups ('and' binds tighter than
>> 'or', '*' tighter than '+' etc.).
>
> Yes, I agree, as long as the usual rules are those familiar from
> mathematics. That's roughly what my table was intended to be, except
> that where C's precedence is backwards from what you'd expect I made it
> an error rather than silently change the meaning of old code.
Well, I won't waste time or effort on trying to fix C. That language is
broken in so many ways, some of them very fundamental, that I'd rather
spend it in inventing my own (which is probably just as fruitless, but
far more fun).
> (a / b * c) probably should mean (a / (b * c)),
That would be inconsistent with a - b + c, which is usually parsed as
(a - b) + c
and not
a - (b + c)
I'm not sure what exactly is going on there.
>> The main problem with partial precedence is that you need to store the
>> precedence table. Take a few dozen of operators, and you get a table
>> that's several hundred entries, all of which must be considered and
>> potentially debugged.
>
> I can't see this being a problem for languages with a fixed set of
> operators. One easy approach is to arrange the operators in a plane such
> that
>
> a < b <=> a_x < b_x && a_y < b_y
>
> I don't know if you can represent every partial order this way,
If I understand you correctly, this operation is called "flattening a
partial order".
The problem is that you lose the information about which operators are
comparable and which aren't, i.e. you lose the "error" entries in the
precedence table.
> but it
> would work here and it's simple and efficient to implement.
Correct.
>> And handling partial precedence gets really icky (you need graph
>> algorithms, which are far less well-known than the sort algorithms
>> that you need for a total order).
>
> For languages that let you define your own rules, all you need is DFS or
> BFS.
You'd want to be efficient, i.e. O(N log N) at worst. That goes beyond
the simple breadth/depth search algorithms.
I'm not even sure that there are libraries available for this kind of
stuff. Well, probably there are, but I'd have to research which of them
are useful and which aren't.
Sorting for a total order is simpler. In anger I could even whip up a
quicksort or mergesort myself.
> There's no need to compute the transitive closure beforehand
> because most programs are not going to rely on very many precedence
> rules. If it is necessary to speed up the search, you can just cache
> previously found paths (or add edges to the graph).
Only to be hit by the first code generator that defines a few thousand
precedence rules for some insane reason.
That kind of reasoning is good for the first version of a compiler, but
it's something I'd want to get rid of rather early in the development cycle.
Regards,
Jo