
                             The Power of Numbers

1. The 12 basic properties of numbers:

P1) (Associative law for addition) (a + b) + c = a + (b + c)

P2) (Existence of an additive identity)  a + 0 = 0 + a = a

P3) (Existence of additive inverses)  a + (-a) = (-a) + a = 0

P4) (Commutative law for addition)   a + b = b + a

P5) (Associative law for multiplication)  a*(b*c) = (a*b)*c

P6) (Existence of a multiplicative identity) a*1=1*a=a    (1<>0)

P7) (Existence of multiplicative inverses)  a*a^(-1) = a^(-1)*a = 1   (a<>0)

P8) (Commutative law for multiplication)  a*b=b*a

P9) (Distributive law)   a*(b + c)= a*b + a*c

P10) (Trichotomy law)  For every number a, exactly one of the following holds:

        a) a = 0

        b) a is in the collection P

        c) a is in the collection -P

{From this law we get the definitions of all the inequalities (a small code sketch follows after P12):

        a) if a-b is in P, then we say a>b

        b) if b>a, then we write a<b  (Please note: this shows that the commutativity of P4 & P8 is not a universal property of all relations.)

        c) if a>b or a=b, then a>=b

        d) if a<b or a=b, then a<=b            }

P11) (Closure under addition) if a and b are in P, then a+b is still in P.

P12) (Closure under multiplication) if a and b are in P, then a*b is still in P.      
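
A minimal sketch of definitions a) to d) above, using double as a stand-in for the real numbers and a built-in sign test as a stand-in for membership in P (the positive numbers); the function names are my own:

#include <cassert>

bool inP(double x)               { return x > 0; }                   // "x is in the collection P" (stand-in)
bool greater(double a, double b) { return inP(a - b); }              // a)  a > b  means  a - b is in P
bool less(double a, double b)    { return greater(b, a); }           // b)  a < b  means  b > a
bool geq(double a, double b)     { return greater(a, b) || a == b; } // c)
bool leq(double a, double b)     { return less(a, b) || a == b; }    // d)

int main() {
    assert(greater(3.0, 2.0) && less(2.0, 3.0));
    assert(geq(2.0, 2.0) && leq(2.0, 2.0));
    return 0;
}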

 

(Most of the following proofs are from the precious textbook "Calculus", and I repeat them here for my own practice. Only a small part of them are my own, and even those are hinted at by the book.)

Claim1: a*0 = 0

Proof:

       a = a*1                         P6

       a = a*(1+0)                     P2

       a = a*1 + a*0                   P9

       a = a + a*0                     P6

Add -a to both sides of the equation a = a + a*0. (I think here we are using a rather profound axiom A1:

    if a = b, then a + c = b + c)

       a + (-a) = a + a*0 + (-a)      A1 ????

       0 = a + (-a) + a*0             P4 && P3

       0 = 0 + a*0                    P3

       a*0 = 0                        P2

Claim2: -a = -1*a

Proof:

    1 + (-1) = 0                     P3

    a*(1+ (-1)) = 0*a                multiply both sides by a (A1   ??????)

    a*1 + a*(-1) = 0                 Claim1

    a +  (-1)*a  = 0                 P6 & P8

    -a + a + (-1)*a = 0+ (-a)        add -a to both sides

    0+   (-1)*a = -a                 P3

    (-1)*a =-a                       P2

Claim3: (-a)*b=-(a*b)

Proof:

        a+ (-a) = 0                P3

        b*(a+(-a)) = b*0           multiply both sides by b (A1 ????)

        b*a + b*(-a) = 0            P9 & Claim1

        a*b + (-a)*b = 0            P8 (This step is not necessary if we start from b + (-b) = 0 instead, but...)

        -(a*b)+a*b + (-a)*b = 0+(-(a*b))  add -(a*b) to both sides (A1 ???)

        0+ (-a)*b = -(a*b)        P2 & P3

        (-a)*b = -(a*b)            P2

Claim4: (-a)*(-b) = a*b

Proof:

        a+(-a) = 0                P3

        (-b)*(a+(-a)) = 0*(-b)    multiply both sides by (-b)

        (-b)*a + (-b)*(-a) = 0    P9 & claim1

        -(b*a) + (-b)*(-a) = 0    claim3

        (b*a) + (-(b*a)) + (-b)*(-a) = 0+(b*a)  add b*a to both sides

        0+ (-b)*(-a) = b*a        P2&P3

        (-a)*(-b) = a*b            P8

Claim5: if a*b = 0 then either a=0 or b=0

Proof:

        if a<>0 then there exists a^(-1) such that a*a^(-1)=1

        So, multiply both sides by a^(-1); then

        (a^(-1)*a)*b = a^(-1) * 0           P5

        1*b = 0                    P7 & Claim1

        b = 0                        P6

        Similarly we can prove that if b<>0 then a=0.

        There are only four possible cases for the values of a and b:

        a) a<>0  , b =0

        b) a=0  ,  b<>0

        c) a=0, b=0

        d) a<>0, b<>0

        We proved cases a) & b) and eliminated case d) at the same time. Case c) is what we desire; it need not

    and cannot be eliminated. (Sounds weird? I also feel puzzled by my English.)

Claim6: -(a-b) = b-a

Proof:

        -(a-b) = -1*(a-b)             claim2

        -(a-b) = -1*a + (-1)*(-b)       P9

        -(a-b)= -a + 1*b                claim2 & claim4

        -(a-b) = b-a                    P6 & P4

Claim7: -(-a) = a

Proof:

        -(-a)= -1*( -1*a)            claim2

        -(-a) = (-1*(-1))*a            P5

        -(-a) = 1*a                  claim4

        -(-a) = a                    P6

Claim8: if a>b, then a+c>b+c

Proof:

        a>b ==> a-b>0            definition a) (see above)

        ==> a-b +0 = a-b >0                    P2

        ==> a-b + c -c =a-b >0                 P3

        ==> a + c +(-b)-c = a-b >0             P4

        ==> (a+c) - (c-(-b)) =a-b>0            claim6 by replacing b with -b

        ==> (a+c) - (b+c) = a-b>0              claim7 & P4

        ==> a+c > b+c                          definition a)

(Please note that we proved that for "inequality" the axiom A1 of "equality" still holds: adding the same number to both

sides of an inequality preserves the inequality. Does that suggest that this is a rather general axiom for many

operations?)

Claim9: if a<b and b<c, then a<c

Proof:

        a<b ==> a-b<0  and b<c ==> b-c<0        By definition a)

        a-c = a-c+0 = a-c + b-b = a-b+b-c         P2 & P3 & P4

    Since a-b and b-c are both in the collection -P, their negatives are in P, so by P11 the sum of the negatives is in P; that sum is -(a-c), so a-c = (a-b)+(b-c) also belongs to -P.

        So, a-c<0, i.e. a<c

(Please note here that we proved the transitive property of inequality. Actually the transitive property of "equality" can also be proved by using "axiom A1" along with the 12 basic laws. See below.)

Claim10: if a=b and b=c then a=c

Proof:

        a=b ==>  a+(-b) = b+(-b)         using axiom A1, adding -b to both sides

        a+(-b) = 0                       P3

        similarly we get: b=c ==> b+(-c) =0

        a + (-c) = a+(-c) +0 = a+(-c) + b+(-b) = a+(-b)+b+(-c)   P2&P3&P4

        a+(-c) = 0+0=0                   by result of above and P2

        a+(-c)+c=0+c                    by A1, adding c to both sides

        a+0=c                            P3

        a=c                               P2

Claim11:  -(a+b)=-a-b

Proof:  (This is a trivial one but it is handy.)

        -(a+b)=-1*(a+b) = -1*a+(-1)*b = -a+(-b)=-a-b       claim2&P9&definition????

(Here I am puzzled a bit about "-" (minus) and "-" (negative). In VC++ they are two completely different

operators. But in mathematics, do we need to define a - b as a + (-b), or should we prove it? If the latter, then

how do we prove it? My instinct is to define "minus" as an operation derived from the "+" (plus) operation. What do

you think?)

       

Claim12: if a<b and c<d, then a+c<b+d

Proof:

        a<b ==> a-b<0  and c<d ==> c-d<0   by definition a)

        a-b+c-d<0                          P11

        a-b+c-d=a+c - (b+d)<0              P4 & claim11

        a+c<b+d                            by definition a)

(Here I am again puzzled a bit about an inference such as "if a<0 and a=b, then b<0". Do we have this kind of

transitive property for "<"? Does this suggest a more general axiom on which all of "algebra" is based?)

(I again wonder whether I am beginning to doubt our most basic foundations of belief. And I begin to find it

difficult to choose which of my two feet should go first when I want to walk...)

Claim13: 1+1<>0

Proof: (Actually the textbook claims that this cannot be proved with P1--P9 alone, because Problem 25

shows that even if we assume 1+1=0, all the properties P1--P9 still hold.)

    My approach is therefore doomed to fail. However, this is my try.

    Assume 1+1=0; then 1 = -1 by adding -1 to both sides. This will destroy the uniqueness of "0". See below:

    0=1+1=1+1+0 = 1+1+  1+(-1)              P2&P3

    0= 1+1+1+1                              since 1=-1

    similarly 0=1+1+1+1+1+1=...=2*k  where k=1,2,...   

    And I think zero is unique under all 9 properties above, so this would destroy the uniqueness of 0, which may

imply a kind of contradiction.

(I am not sure about this. Maybe we should add the word "unique" to P2 and P3, so that: there exists a unique "0" such that a+0=a; and for each a there exists a unique -a such that a+(-a)=0. Then my proof might work. What do you think?)

(By the way, can we prove the uniqueness of "zero" from the 9 properties alone?)
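
For what it is worth, the two-element system {0,1} with addition and multiplication taken mod 2 is a concrete case where all of P1--P9 hold and yet 1+1=0 (which is essentially the point of Problem 25). A small brute-force check, just as an illustration:

#include <cassert>

// Arithmetic in the two-element field {0,1}: everything is taken mod 2.
int add(int a, int b) { return (a + b) % 2; }
int mul(int a, int b) { return (a * b) % 2; }

int main() {
    const int elems[] = {0, 1};
    for (int a : elems) for (int b : elems) for (int c : elems) {
        assert(add(add(a, b), c) == add(a, add(b, c)));          // P1
        assert(mul(mul(a, b), c) == mul(a, mul(b, c)));          // P5
        assert(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)));  // P9
    }
    for (int a : elems) {
        assert(add(a, 0) == a && add(0, a) == a);                // P2
        assert(add(a, a) == 0);                                  // P3: each element is its own additive inverse
        assert(mul(a, 1) == a && mul(1, a) == a);                // P6 (and 1 <> 0)
        if (a != 0) assert(mul(a, a) == 1);                      // P7: 1 is its own multiplicative inverse
    }
    for (int a : elems) for (int b : elems) {
        assert(add(a, b) == add(b, a));                          // P4
        assert(mul(a, b) == mul(b, a));                          // P8
    }
    assert(add(1, 1) == 0);  // and yet 1 + 1 = 0 here, so P1--P9 alone cannot rule it out
    return 0;
}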

Claim14: (a^(-1))^(-1) = a  where a<>0

Proof:  (Let's denote a^(-1) as f(a), so, f(f(a)) means the 1/(1/a)  )

        a*f(a) = 1                                   P7

        ==>    a*f(a)*f(f(a)) = 1*f(f(a))            multiply both sides by f(f(a))

        ==>  a*1 = f(f(a))                           P5, P7 & P6

        ==> a = f(f(a))   <==> (a^(-1))^(-1) = a

         

            Here it comes...

hi Nick,
 
Actually I'm not sure what your question is. Is it Claim 13: 1+1<>0, or something else? As you said, Claim 13 cannot be proved with P1-P9; I guess that means it can be proved with P11 very easily ( 1>0 & 1>0 ==> 1+1>0 (by P11) ==> 1+1<>0 ). To be honest, I think these proofs are not very interesting, actually very boring. :)  Have a nice day.
 
Yong

            Here it goes...

You are very accurate about this!!
However, there is one thing I cannot agree with: they are not so boring. :) You see, what I want to discuss is the "axioms". Maybe you have noticed that apart from the 12 laws there is an implicit axiom: if a=b then a+c=b+c. Or in other words, the transitivity of "equality": if a=b, then by transitivity a+c can be replaced by b+c and the "equality" still holds. This seems trivial, but you see, not all operators support transitivity. And the "equality sign" seems to me to be just another common operator, like in C++.
 
This trivial problem cannot attract the slightest interest from Mr. ZhuChunMing because he thinks it is college stuff. However, I am taking "linear algebra" at the same time as "mathematical analysis I". The idea is much more profound than the matrix operations which I learned in university as "engineering mathematics". A "vector space" obeys similar laws of associativity, commutativity, distributivity, closure under addition, etc. And all these properties apply to a large family of mathematical objects, apart from numbers. So I think these rules are more general principles of the universe, and it just happens that we first learn them in the field of natural numbers, and consequently take it for granted that they are built-in properties of numbers.
 
Sometimes I just wish I had the chance to study more mathematics.
 
Nick Huang/Qingzhe Huang

I proved a trivial lemma...

Claim: Suppose V is a vector space over a scalar field K and U={u1,u2,...,un} is a linearly independent set of vectors. Then no proper subset of U can span V.

Proof: Let's prove by contradiction.

Assume W is a proper subset of U and W spans V. Then the difference of U and W, U-W, is not empty. Choose any non-zero scalars for the vectors of U-W to form a linear combination, and call the resulting vector v. Since W spans V, -v is a linear combination of W. Now v+(-v)=0, which means that the sum of some linear combination of U-W and some linear combination of W is the zero vector. The union of U-W and W is exactly U, so in other words we have found a linear combination of U equal to the zero vector, and since the scalars on U-W are non-zero, U is linearly dependent, which contradicts the given condition. Hence the claim is proved.

                   

Claim: For a given basis B of a vector space V, every vector v in V has a unique representation as a linear combination of B.

Proof:

By the definition of a basis of a vector space, we know there exists a linear combination of the basis B for every vector v.

Now let us prove that if there exist two or more linear combinations of B for v, they must be the same.

Let's prove it by contradiction.

Assume B={u1,u2,u3....un} and

    v=a1*u1+a2*u2+...+an*un                   (1)

    v=b1*u1+b2*u2+...+bn*un                   (2) 

and the two sequences of coefficients are not all the same, that is, at least one of a1-b1, a2-b2, ..., an-bn is not zero.

(2)*(-1) + (1):

    v+(-v) = 0 = (a1-b1)*u1+(a2-b2)*u2 +....+(an-bn)*un

Since by assumption not all of the coefficients ai-bi are zero, this gives a non-trivial linear

combination of {u1,u2,...,un} equal to zero, i.e. the set is linearly dependent, which contradicts the given. So the assumption is false.

 

     questions...  

Dear professor,
 
I tried to read the textbook and found some questions, as follows. Could you be kind enough to look into them? Thank you.
 
1)  On page 142, 4.16, the proof says that if S is the empty set of vectors, then span(S)={0} by definition. But I checked the definition of a spanning set on page 120 and I cannot conclude this claim from the definition. Can you justify this?
And a silly question: if span(empty set) = {0}, then the empty set itself forms a subspace, because by Theorem 4.5 span(empty set) is a subspace containing the empty set. Then the empty set must satisfy the property of closure under addition and scalar multiplication. What is the result of empty set + empty set?
 
2) On page 121, the definition of subspace only requires the vectors in the subspace to satisfy closure under vector addition and scalar multiplication. But Theorem 4.2 on page 122 says that the conditions for W to be a subspace are: i) the zero vector belongs to W; ii) closure under vector addition and scalar multiplication.
 
My argument is that condition i) is not necessary to prove W is a subspace; condition ii) is sufficient by the definition of subspace. For if scalar multiplication is closed in W, then for any vector u in W, 0*u=0 must also belong to W, which means that "the zero vector belongs to the subspace" is simply a direct consequence of closure under scalar multiplication.
 
 
3) I feel puzzled about the proof on page 144, 4.21.
What I translate the proof into is the following:
To prove f(t), g(t),h(t) are linearly independent is to prove:
forAll a,b,c( forAll t( axf(t) + bxg(t)+cxh(t)=0  --> a=b=c=0))
 
To prove it is false, we can negate the above statement:
not (forAll a,b,c( forAll t( axf(t) + bxg(t)+cxh(t)=0  --> a=b=c=0)))
<==> exist a,b,c( exist t( axf(t) + bxg(t)+cxh(t)=0  --> not(a=b=c=0)))
 
So, by proving the above to be false, we can prove the original claim to be true.
But the proof in the book seems to me like proving:

exist a,b,c( exist t( axf(t) + bxg(t)+cxh(t)=0  --> (a=b=c=0) ))

which is not a sufficient proof of linear independence. Can you justify this for me?
 
Thank you for your time,
 
B.rgds,
 
Nick Huang/Qingzhe Huang

    Are we proving they are linearly independent or not?

Hi professor,

I have a lab this afternoon, so I didn't come to your office. Regarding
point 3, I tried to find a counter-example to help justify the
problem. (Also see example 4.10c on page 127.)
I define two function vectors:
f(t)= sin(t Pi/4)
g(t)= cos(t Pi/4)
where Pi stands for the number pi and t is in R.

Then I tried to follow the method in the textbook to prove that these two
function vectors are linearly independent.
That is, for all t:
    a f(t) + b g(t) = 0  ==> a=b=0

Let's choose t=0; then we get b=0.
Let's choose t=2; then we get a=0.
So, as is done in the textbook, I claimed these two vectors are linearly
independent since a=b=0 for all t.

But choose t=1; then a=1, b=-1 also works.
So, are we proving they are indeed linearly independent or not?

Thank you for your time,


Nick Huang/Qingzhe Huang
   

    So, I am wrong...

Hi professor,

Thank you for your message.
I thought it over and realized that I am wrong. A linear combination requires
the coefficients to be constant. Then, by proving that there is no consistent
solution, we can be sure that it is impossible to find a linear combination
of the vectors such that the result is the zero vector. Am I right?
Thank you, and I will try to get to Loyola earlier.
B.rgds,

Nick Huang/Qingzhe Huang
 

    Yes, You are wrong...

> The linear combination requires
> the coefficients to be constant.


Exactly.
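
A small numeric check of the point settled above (my own illustration): with constant coefficients a and b, imposing a*f(t) + b*g(t) = 0 at just the two values t = 0 and t = 2 already forces a = b = 0, because the resulting 2x2 system is non-singular.

#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    // f(t) = sin(t*PI/4), g(t) = cos(t*PI/4), as in the messages above.
    double f0 = std::sin(0 * PI / 4), g0 = std::cos(0 * PI / 4);  // row for t = 0
    double f2 = std::sin(2 * PI / 4), g2 = std::cos(2 * PI / 4);  // row for t = 2
    // a*f(t) + b*g(t) = 0 at both t-values is a 2x2 linear system in (a, b);
    // a non-zero determinant means a = b = 0 is the only constant solution.
    double det = f0 * g2 - g0 * f2;
    std::printf("det = %g  (non-zero, so only a = b = 0 works for all t)\n", det);
    return 0;
}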
 

 

        Guess what I look like...

Question: Set A is a set of real numbers with the following properties:

    1) If a is in A, then for all x<a, x is in A.

    2) A is not the empty set.

    3) A is not equal to R, the set of all real numbers.

    4) If a is in A, then there exists some x in A such that a<x.

Now tell me what is A?

Answer: A={x: x<c} for some real number c.

    i) Claim: A has an upper bound.

proof:

    Since A is not equal to R, there exists some number x not in A.

    Then for all a in A, a<x. Otherwise, if there exists a in A with a>=x, then

    by 1) x must be in A, which contradicts the assumption. So x is an upper bound of A.

    ii) Claim: A has no lower bound.

proof:

    It is simple: assume b is a lower bound of A. Since A is not empty, choose some a in A; then b<=a.

    Consider b-1, which is smaller than b, so b-1 < b <= a. By 1) we know

    b-1 is in A. However, b > b-1, which contradicts the assumption that b is a lower bound of A.

    So, A has no lower bound.

    iii) Claim: A has no maximum number.

proof:

    Assume m is the max of A; then m is in A, and by 4) there exists some a in A such that a>m. But this contradicts

    our assumption that m is the max of A. So, A has no max number.

    iv) Claim: A has a least upper bound. This is a direct result of Property 13, which states that if a

    non-empty set of real numbers has an upper bound then it has a least upper bound.

Now we have the picture of A: {x: x<c}, where c is the least upper bound of A.

            A simple example

    The teacher gives an example showing that an infinite sequence of nested open intervals may have no element

that falls in all of the intervals.

    i.e. I0=(a0, b0), I1=(a1,b1), ..., In=(an,bn), ... where an<=a(n+1)<=b(n+1)<=bn

    (Please note that a(n+1), b(n+1) mean the a, b with subscript n+1.)

The example is In=(0, 1/n), which keeps shrinking; whenever you claim that you have found a number z lying in

all the In's, we can easily find some m such that 1/m < z, and the interval (0, 1/m) doesn't contain

the number z.

Claim: ||x|-|y||<=|x-y|   

Proof:   (We are allowed to use |x|+|y| >= |x+y|.)

    We want to show -|x-y| <= |x|-|y| <= |x-y|.

    a)  |x-y| + |y| >= |x-y+y|=|x|    ==>   |x| - |y| <= |x-y|

    b)  |y-x| + |x| >= |y-x+x|=|y|    ==>   |x| -|y| >=  -|x-y|

by a) b)  -|x-y| <= |x|-|y| <=|x-y|  ==>  ||x|-|y||<=|x-y|   

 

Claim: Graph can be represented as linear combination of vertex

Idea: Assume G=(V,E) where V={A,B,C,D} and E={AB, BD, CA, AC}

let's design a basis for G:

v1= 1 0 0 0            v2= 0 0 0 0
    0 0 0 0                0 1 0 0
    0 0 0 0                0 0 0 0
    0 0 0 0                0 0 0 0

 

            An alternative view of similarity

B=P~AP, where P~ represents the inverse of P.

How do we get this?

Claim: A is the matrix representation of a linear transformation with respect to the basis S={u1,u2,u3,...}.

Now we want to calculate the matrix of this same transformation with respect to a new basis S'={v1,v2,v3,...}.

1. We need to calculate A(v1), A(v2), A(v3), ... Please note that these are the vectors, not the coordinate matrices.

[v1]S', which represents the coordinates of v1 with respect to S', is [1,0,0,...]

[v2]S', which represents the coordinates of v2 with respect to S', is [0,1,0,...]

...

2. The change-of-basis matrix from S' to S is P, whose columns consist of the coordinates of v1,v2,v3,... with

respect to the basis S.

i.e.

v1= a1*u1+b1*u2+c1*u3...

v2= a2*u1+b2*u2+c2*u3...

...

P= a1 a2  ...

   b1 b2  ...

   c1 c2  ...

   ...

Therefore the coordinates of v1,v2,v3,... with respect to the basis S are [v1]S = P*[1,0,0,...], [v2]S = P*[0,1,0,...], [v3]S = P*[0,0,1,...], ..., i.e. exactly the columns of P.

3. Calculate [A(v1)]S, [A(v2)]S, [A(v3)]S, ..., which are the coordinates, with respect to the basis S, of the vectors mapped by

the transformation:

[A(v1)]S = A*[v1]S = A*P*[v1]S' = A*P*[1,0,0,...]

[A(v2)]S = A*[v2]S = A*P*[v2]S' = A*P*[0,1,0,...]

[A(v3)]S = A*[v3]S = A*P*[v3]S' = A*P*[0,0,1,...]

...

4. Now use the change-of-basis matrix again to transform the coordinates back from S to S':

P~*A*P*[1,0,0,...], P~*A*P*[0,1,0,...], P~*A*P*[0,0,1,...], ... which are exactly the coordinates, with respect to S',

of the images of v1, v2, v3, ... Use these coordinates as columns to form the matrix representation of the linear transformation

with respect to the new basis S', which is exactly P~*A*P.  (Multiplying by [1,0,0,...], [0,1,0,...], ... simply picks out the

columns, and those columns form exactly the same matrix.)

5. The conclusion is that B=P~*A*P represents exactly the same linear transformation that A represents. It is

only a change of basis.

6. Question: Do we specify the particular bases S and S' here?

Answer: No, we don't even know what the original basis S is. It is like a function: whenever we choose a basis

S, we get a new basis S'. That explains why there are infinitely many such P: there are infinitely many pairs of bases S

and S'.
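
A tiny numeric illustration of steps 1-5, with hypothetical 2x2 matrices A and P chosen only to show the computation of B = P~*A*P:

#include <cstdio>

struct Mat2 { double a, b, c, d; };    // the 2x2 matrix [a b; c d]

Mat2 mul(Mat2 x, Mat2 y) {
    return { x.a*y.a + x.b*y.c, x.a*y.b + x.b*y.d,
             x.c*y.a + x.d*y.c, x.c*y.b + x.d*y.d };
}

Mat2 inverse(Mat2 x) {                 // P~, assuming the determinant is not 0
    double det = x.a*x.d - x.b*x.c;
    return { x.d/det, -x.b/det, -x.c/det, x.a/det };
}

int main() {
    Mat2 A = {2, 1, 0, 3};                 // matrix of the transformation in basis S
    Mat2 P = {1, 1, 0, 1};                 // change-of-basis matrix from S' to S
    Mat2 B = mul(inverse(P), mul(A, P));   // the same transformation, expressed in S'
    std::printf("B = [%g %g; %g %g]\n", B.a, B.b, B.c, B.d);
    return 0;
}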

 

    Some idea about multiplication of big numbers

When I took COMP 465, the professor introduced Strassen's fast algorithm for multiplying big matrices. Strassen is considered a genius for designing such a clever method, because before that nobody imagined that recursion could improve the speed of multiplying two matrices.

When I travelled to Guangzhou, I wandered around near the hotel and bought an algorithms book which introduced a similar algorithm for multiplying two big numbers. It enlightened me a bit, since I had once spent a whole morning trying to find a clue as to how Strassen could have designed his clever method. This algorithm actually gives us some idea. Based on it, I tried splitting into three parts to see if there is any improvement; unfortunately, splitting into two is the best.

Assume two big numbers A and B, both with n digits in base 10, and we want to find out AxB.

1. Let's define a notation for powers of a base b as follows: power(m, b) means b to the power of m.

2. Assume n = power(k, 2), i.e. n is a power of 2, where k is an integer. (This is to simplify the proof; it does not affect the algorithm, since we can always pad n up to the closest power of 2.)

3. Rewrite A as A = a0 x power(n/2, 10) + a1, where a0 and a1 are integers of n/2 digits each. In other words, when we write down the Arabic numeral format of A, it looks like "a0a1".

4. Similarly rewrite B = b0 x power(n/2, 10) + b1

5. Then AxB = (a0 x power(n/2, 10) + a1) x (b0 x power(n/2, 10) + b1)

  = a0 x b0 x power(n, 10) + (a0xb1+a1xb0)xpower(n/2, 10) + a1xb1

Rewrite the middle term: a0xb1+a1xb0 = (a0+a1)x(b0+b1) - a0xb0 - a1xb1

6. Rewrite AxB = c0 x power(n, 10) + c1 x power(n/2, 10) + c2

where c0= a0xb0; c1= (a0+a1)x(b0+b1) - c0 -c2; c2= a1xb1

7. AxB is thus transformed into three multiplications of half-sized numbers, namely a0 x b0, a1 x b1, and (a0+a1) x (b0+b1).

We will ignore the carry produced by a0+a1 and regard the result as still having at most n/2 digits.

8. Recursively apply this method to the three smaller multiplications.

9. Let's denote by M(n) the cost of multiplying two numbers with n digits. There are also six additions, and quite interestingly they can be sped up since they occupy disjoint positions of the final result. Then, counting only the multiplications, we have the following recurrence:

 M(n) = 3 x M(n/2), with M(1) = 1 in particular,

which solves to M(n) = n^(log2 3), roughly n^1.585, compared with n^2 for the schoolbook method.
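
A minimal sketch of the recursion in steps 3-9. For brevity it works on ordinary 64-bit integers split by decimal digits, so it only demonstrates the structure of the three half-sized multiplications; in a real big-number implementation a0, a1, b0, b1 would themselves be arrays of digits and the additions and shifts would be digit-wise.

#include <algorithm>
#include <cstdint>
#include <iostream>

static uint64_t pow10(unsigned k) {              // power(k, 10)
    uint64_t p = 1;
    while (k--) p *= 10;
    return p;
}

static unsigned digits(uint64_t x) {             // number of decimal digits of x
    unsigned d = 1;
    while (x >= 10) { x /= 10; ++d; }
    return d;
}

// Karatsuba-style multiplication: three half-sized products instead of four.
uint64_t karatsuba(uint64_t A, uint64_t B) {
    if (A < 10 || B < 10) return A * B;          // base case, M(1)
    unsigned n = std::max(digits(A), digits(B));
    uint64_t shift = pow10(n / 2);

    uint64_t a0 = A / shift, a1 = A % shift;     // A = a0 x power(n/2, 10) + a1
    uint64_t b0 = B / shift, b1 = B % shift;     // B = b0 x power(n/2, 10) + b1

    uint64_t c0 = karatsuba(a0, b0);             // high product
    uint64_t c2 = karatsuba(a1, b1);             // low product
    uint64_t c1 = karatsuba(a0 + a1, b0 + b1) - c0 - c2;   // middle term

    return c0 * shift * shift + c1 * shift + c2;
}

int main() {
    std::cout << karatsuba(12345678, 87654321) << "\n";   // should match the line below
    std::cout << 12345678ULL * 87654321ULL << "\n";
    return 0;
}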

 

Equivalence Class

When I read a book about "abstract algebra", I came to think that all the nine theorems at the top of this page can be deduced from the

three basic properties of an equivalence relation: reflexive, symmetric, transitive. Say "=" represents a relation of "equivalence";

then

1. reflexive: x=x

2. symmetric: x=y ==> y=x

3. transitive: x=y and y=z ==> x=z

Let's go over the proofs of the nine theorems at the top of this page.

First, the so-called replacement principle actually originates from transitivity: if x appears in an equation f and we know x=y,

then we can replace x with y at each occurrence of x in f. ??? But how do we prove this? I cannot explain it to myself. ???

How do we prove that every non-negative number has a unique representation of the following form?

Let r be a number in [0, n!-1], and r = summation(d, n), where:

int summation(const int d[], int n)   // array d satisfies 0 <= d[i] <= i, for i = 1, ..., n-1
{
    int sum = 0;
    int fact = 1;                     // holds i! inside the loop
    for (int i = 1; i < n; i++)
    {
        fact *= i;                    // now fact == i!
        sum += d[i] * fact;
    }
    return sum;
}

In English, the problem can be expressed like this: any non-negative number r can be represented as a summation

of consecutive factorials with a unique sequence of coefficients.

proof:

(At the beginning, I tried to use mathematical induction on both r and n, but got no result. After visiting the

professor, I understood that there is a constructive algorithm for the proof, and the method also gives a way to construct the

solution.)

Let the number r be divided successively by 2, 3, 4, ..., n:

void construct(int r, int n, int d[])   // the array is used from index 1
{
    int result = r;
    for (int i = 2; i <= n; i++)
    {
        d[i-1] = result % i;            // coefficient of (i-1)!, between 0 and i-1
        result /= i;
    }
}

Why does it work? Because the successive divisions eventually make the result 0. Let's check the range it covers by looking at

the worst case, where every coefficient takes its maximum value, namely d[i] = i. The summation is then:

r = 1*1! + 2*2! + 3*3! + ... + (n-1)*(n-1)!

Does r exceed the upper boundary n!-1? Let's see:

r = (2-1)*1! + (3-1)*2! + (4-1)*3! + (5-1)*4! + ... + (n-1)*(n-1)! = 2!-1! + 3!-2! + 4!-3! + 5!-4! + ... + n!-(n-1)! = n!-1

So the largest representable value is exactly n!-1, and we can use this constructive method to represent every number r that is smaller than or equal to n!-1.

Is it unique?

Let's prove it by contradiction.

Assume there are two distinct coefficient arrays d1[N], d2[N], that is, r = Rep(d1[]) = Rep(d2[]) where "Rep" means the

factorial representation. Let's subtract the two representations:

(d1[1]-d2[1])*1! + (d1[2]-d2[2])*2! + (d1[3]-d2[3])*3! + ... + (d1[n-1]-d2[n-1])*(n-1)! = 0

Since the two representations are distinct, at least one of the n-1 terms is non-zero (and it is easy to see that a single

non-zero term cannot make the sum zero, so there are at least two). Take the highest-index non-zero term, say term i. Without

loss of generality assume it is negative (if not, do the subtraction the other way around). Then the sum of all the lower terms

must equal its absolute value. However, recall from the proof above that the maximum of 1*1! + 2*2! + ... + (i-1)*(i-1)! is

i!-1, which is smaller than i!. So even if term i has coefficient only -1, the sum of all the remaining terms is still smaller

than this term, and the equation can never be balanced. This is a contradiction, so the assumption is impossible.
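
A small round-trip check of the two routines above (my own test driver): every r in [0, n!-1] is rebuilt exactly from the factorial-base digits that construct() produces.

#include <cassert>

int factorial(int n) {                   // n!, for small n only
    int f = 1;
    for (int i = 2; i <= n; i++) f *= i;
    return f;
}

void construct(int r, int n, int d[]) {  // as above: fills d[1..n-1], with 0 <= d[i] <= i
    int result = r;
    for (int i = 2; i <= n; i++) { d[i-1] = result % i; result /= i; }
}

int summation(const int d[], int n) {    // r = d[1]*1! + d[2]*2! + ... + d[n-1]*(n-1)!
    int sum = 0, fact = 1;
    for (int i = 1; i < n; i++) { fact *= i; sum += d[i] * fact; }
    return sum;
}

int main() {
    const int n = 7;                     // representable range: [0, 7!-1] = [0, 5039]
    int d[n];
    for (int r = 0; r < factorial(n); r++) {
        construct(r, n, d);
        assert(summation(d, n) == r);    // the representation exists and is recovered exactly
    }
    return 0;
}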

 

 Did I re-discover a well-known vector space?

hi professor,

Thank you for your comments on the assignment.
i) When I read the Knuth notes, I think I re-discovered the well-known fact that there exists a vector space defined like this:
a) the vectors are all binary strings (padded with 0 at the left to make all strings equal length).
b) vector addition is the xor operation.
c) the scalars are also binary strings of the same length, and the scalar operation on a vector is also xor. (This is the strange thing that I don't understand.)
d) The following is a demo of the axioms satisfied by this vector space. Let v1, v2, v3 be binary-string vectors, k be a scalar which is also a binary string, "+" denote the xor operation, and 0 be the binary string of all 0's.
1) (v1 + v2) + v3 = v1 + (v2 + v3)  associative law
2) v1 + 0 = 0 + v1 = v1  identity law
3) v1 + v1 = 0   inverse law
4) v1 + v2 = v2 + v1  commutative law
5) k + (v1 + v2) = (k + v1) + (k + v2)  distributive law for scalars (Here I suspect I am cheating, because this seems to be exactly the identity law (2) plus the commutative law (4). And should the operation between scalar and vector be a different one?)
6) (k1 + k2) + v1 = (k1 + v1) + (k2 + v1)  distributive law for scalars. (This also seems like cheating, because it is essentially the same thing as 5), and my doubts remain.)
7) k + v1 = v1  unity law (here k is the scalar 0, and this is not so convincing to me.)

ii) Inspired by the Knuth notes (A1, page 4, formula 11) on a property of Gray codes, the Gray code seems to me to be exactly a linear operation: see g(v1+v2) = g(v1)+g(v2).
And here I understand the reason that you used a vector space to explain the puzzle of (73,16) (I forget the details).

iii) But my doubt about i) remains, concerning the scalar operation. Can the operation between a scalar and a vector be the same as vector addition? It seems like cheating, right? Can you justify this for me? (A small sketch follows below.)
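
A small sketch of the construction in i), but with the scalars restricted to {0,1} (the usual two-element field) instead of whole bitstrings; the bitstring-scalar version is exactly the doubt raised in iii), so it is not assumed here.

#include <cassert>
#include <cstdint>

using Vec = uint32_t;                        // a 32-bit binary string

Vec add(Vec a, Vec b)  { return a ^ b; }     // vector addition = xor
Vec smul(int k, Vec a) { return k ? a : 0; } // scalar multiplication, k in {0, 1}

int main() {
    Vec v1 = 0b1011, v2 = 0b0110, v3 = 0b1100, zero = 0;

    assert(add(add(v1, v2), v3) == add(v1, add(v2, v3)));           // 1) associative law
    assert(add(v1, zero) == v1 && add(zero, v1) == v1);             // 2) identity law
    assert(add(v1, v1) == zero);                                    // 3) inverse law: each vector is its own inverse
    assert(add(v1, v2) == add(v2, v1));                             // 4) commutative law
    assert(smul(1, add(v1, v2)) == add(smul(1, v1), smul(1, v2)));  // distributive law over vectors
    assert(smul(1, v1) == v1);                                      // unity law, with 1 as the unit scalar
    return 0;
}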


By the way, about your comment on assignment 1, question 3: you wrote that returning a temporary set object is an expensive operation. I totally agree, and this issue has been argued about since the first time I studied object-oriented programming in COMP 249. However, from the user's point of view it might be a good idea. See below, and the small sketch after my arguments:
a) Usually I will overload operators for the set operations, i.e. set union may be overloaded as "+" and set intersection becomes "*". Then it makes the user happy to be able to write (set1+set2)*set3.
b) However, this usually requires all set objects to be passed by value, as the user usually expects that after the operation set1, set2, set3 remain the same, so that an expression like (set1+set2)*(set1+set3) also works fine.
c) So the question is: what is the return type of a set operation? The only choice is a temporary object returned by value, because the other choice, a dynamically allocated object, increases the burden on the user, which inevitably leads to memory leaks.

These are my arguments for my design of set operations.
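
A minimal sketch of the design in a) to c), using a hypothetical IntSet wrapper around std::set<int> (the class name and members are my own illustration, not the actual assignment code). Operands are passed by value as argued in b), and each operation returns a temporary by value as in c), so set1, set2, set3 are never modified.

#include <algorithm>
#include <initializer_list>
#include <iterator>
#include <set>

class IntSet {
public:
    IntSet() = default;
    IntSet(std::initializer_list<int> xs) : s(xs) {}

    // Union: operands passed by value, result returned by value as a temporary.
    friend IntSet operator+(IntSet a, IntSet b) {
        a.s.insert(b.s.begin(), b.s.end());
        return a;
    }

    // Intersection, same convention.
    friend IntSet operator*(IntSet a, IntSet b) {
        IntSet r;
        std::set_intersection(a.s.begin(), a.s.end(), b.s.begin(), b.s.end(),
                              std::inserter(r.s, r.s.begin()));
        return r;
    }

private:
    std::set<int> s;
};

int main() {
    IntSet set1 = {1, 2}, set2 = {2, 3}, set3 = {2, 4};
    IntSet x = (set1 + set2) * set3;           // reads naturally; no operand is modified
    IntSet y = (set1 + set2) * (set1 + set3);  // also fine, thanks to the value semantics
    return 0;
}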

Thank you and have a nice weekend,


nick huang/qingzhe huang
  

 

   

 
