database
               The symbol is beyond what you see...
Here it goes....        
Hi sir,

In your slides of topic 5, the definition of "equivalence" reads as follows:

"Defn: Suppose R is a relation schema, and S and T are
sets of functional dependencies on R.
T and S are equivalent (S ≡ T) if every instance r of R that
satisfies all the FD's in S also satisfies all the FD's in T,
and vice versa (S |= T and T |= S)"

Here we are talking about equivalence BETWEEN sets of FD's.

Then the next slides give all of "Armstrong's axioms", defined as follows:

"R is a relation schema, and X, Y and Z are subsets of R.
Reflexivity
 If Y ⊆ X, then X ≡ Y (trivial FD's)"

Here we talk about subsets of a relation schema, which I think refers to the ACTUAL records in a table, right? How can we say a subset of a relation schema is "equivalent" to another when we have no definition of equivalence for them?
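For what it's worth, the definition of satisfying an FD can be checked mechanically on an instance. Below is a small sketch (the instance r and the attribute names are made up for illustration) showing why a trivial FD X -> Y with Y ⊆ X can never be violated:

```python
# Sketch: does instance r (a list of dicts, one dict per tuple)
# satisfy the functional dependency X -> Y?

def satisfies(r, X, Y):
    # X -> Y holds iff any two tuples that agree on X also agree on Y.
    seen = {}
    for t in r:
        x_val = tuple(t[a] for a in X)
        y_val = tuple(t[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False
        seen[x_val] = y_val
    return True

# A made-up instance:
r = [{"A": 1, "B": 2}, {"A": 1, "B": 2}, {"A": 3, "B": 4}]

# Reflexivity: if Y is a subset of X, agreeing on X already forces
# agreement on Y, so the FD is satisfied by every instance.
print(satisfies(r, X=["A", "B"], Y=["A"]))  # True
```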

THIS IS IMPORTANT:
While writing this mail and copying and pasting from your slides, I noticed that the SEEMINGLY-SAME "equivalence" symbols in the slides display completely differently in Outlook Express. For example, in the definition of "equivalence", the symbol is displayed as "≡", which is the standard equivalence symbol in discrete mathematics. However, in "Armstrong's axioms" it becomes "⊆", which is the subset symbol. I think this may be the source of the confusion. Hopefully it is not just a display problem on my computer, which would mean many of my classmates suffer from the same puzzle. You see, in your slides there are many symbols that do not display correctly, and we regarded all those "blank square" symbols as the same. That's the problem.

Can you do anything about the symbol-display problem?
¡¡
thank you for your time,

Nick Huang/Qingzhe Huang

Here it goes.... 

Hi sir,

Regarding the computation of the canonical cover in the solution of A2, question 2-b: in the 2nd step we need to determine left-redundancy. I don't understand the meaning of "B is extraneous in AB->T because F logically implies {F1 - (AB->T)} U {A->T}...".
What I understand is that if "A->T" can be implied from F1 after REMOVING "AB->T", then B is redundant, right? Or in other words, if {F1 - (AB->T)} |= (A->T), then at least we know B is redundant.
For example, in your case:
F1 = {A->B, AB->T, R->C, NS->BT}
F2 = {F1 - (AB->T)} U {A->T} = {A->B, A->T, R->C, NS->BT}
Obviously F1 <> F2.
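For reference, the standard test for whether B is extraneous in AB->T is whether T is in the attribute closure of {A} computed under the FULL set F1, not under F1 minus the FD. A small sketch of the closure algorithm (FDs written as pairs of attribute sets, using the example above):

```python
# Sketch of the attribute-closure algorithm X+ under a set of FDs.

def closure(attrs, fds):
    # Repeatedly add the right side of any FD whose left side is covered.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for left, right in fds:
            if left <= result and not right <= result:
                result |= right
                changed = True
    return result

F1 = [({"A"}, {"B"}), ({"A", "B"}, {"T"}), ({"R"}, {"C"}), ({"N", "S"}, {"B", "T"})]

# A+ under the full F1: A -> B puts B in, and then AB -> T puts T in.
print(sorted(closure({"A"}, F1)))  # ['A', 'B', 'T']
```

Since T is in A+ under the full F1, F1 implies A->T, which is the sense in which B is extraneous, even though {F1 - (AB->T)} alone does not imply A->T.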
Can you explain for me please?
Thank you for your time,
Nick Huang/Qingzhe Huang


Here is what we learned from comp353:
1. Oracle doesn't support "except".
2. You cannot define two "foreign key" constraints within one "alter table" statement.
3. You can use "references" even when it is not referencing a key. However, if the attribute references only part of a composite key, you will find you cannot do it with the "foreign key" keyword.
4. Best strategy: write the plain table DDL in one file, without even primary keys or foreign keys. Then write all other constraints as "alter table" statements in other files, and name everything.
5. Oracle doesn't support "on update cascade".
6. You will find the system table "dual" very useful: select 'anything here' from dual;
7. You can raise exceptions in triggers. And in order to give a meaningful error message, intercept the exception:
exception
when your_defined_exception_name then
raise_application_error(-20100, 'here I will generate my own error message instead of Oracle''s meaningless error numbers');
		My puzzle in assignment...
Hi Professor,
¡¡
I copied the following question from my assignment, and I cannot explain the solution. Please kindly check my deduction.

Let's go over choice a) (for the file sorted on x, n = 460):
1) Basic data:
a) The total number of blocks to store all records: 1,000,000/100 = 10,000
b) The number of records returned by the query, when n = 460: 460 x 460 = 211,600
c) The number of blocks holding the records in the query range: 460 x 460/100 = 2,116
d) The total number of index blocks: 1,000,000/200 = 5,000
2) I calculate the number of I/O's for a linear search like this:
a) First, find the first qualifying record in the file, namely the record with x = 270. This is a binary search, which requires lg 10,000 <= 14 reads among the 10,000 data blocks.
b) Then we read the data blocks one by one until x > 730: (460/1000) x 10,000 = 4,600.
c) So the linear search needs 4,600 + 14 = 4,614 I/O's in total.
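The arithmetic above can be replayed in a few lines (a sketch using only the numbers given in the question):

```python
import math

records = 1_000_000
per_block = 100
blocks = records // per_block            # 10,000 data blocks
n = 460

# Binary search for the first qualifying record (x = 270):
binary = math.ceil(math.log2(blocks))    # 14 block reads

# Sequential scan of the qualifying fraction n/1000 of the file:
scan = (n * blocks) // 1000              # 4,600 block reads

print(binary, scan, binary + scan)       # 14 4600 4614
```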
3) Now I calculate the number of I/O's for an indexed search:
a) The number of I/O's to find all the index entries: we need 1 to find the first entry, x = 270. And from that block, we need to retrieve all index blocks until x > 730. The total number of index blocks is (460/1000) x 1,000,000/200 = 2,300.
....

I don't want to continue, because this simply implies that the author doesn't want us to retrieve all the index blocks. Instead, he implies that we need only retrieve the starting index entry and then retrieve the data records, so the overhead of the index is only 1 I/O compared with the "linear search". I tried to persuade myself that this is the correct explanation. But then I cannot explain choice d), because it would take exactly the same overhead to retrieve the first data block as for n = 460.
So, do you see my puzzle? Can you explain this for me?
3. Consider a file with 1,000,000 records, with keys (x,y). We assume that the records are distributed randomly in a space in which both x- and y-coordinates range from 0 to 1000. Suppose that 100 records fit on a block, and an average B-tree leaf has about 200 key-pointer pairs (recall that not all slots of a B-tree block are necessarily occupied at any given time). Assume that there are B-tree indexes of height 3 on both x and y and that the roots of both indexes are kept in main memory. Now, imagine that we are given a range query asking for a square in the middle that is n x n for some n between 1 and 1000. Calculate the number of I/O's needed to answer the query if the records in the file are stored randomly and if the records in the file are sorted on x or y. For which values of n do indexes help, compared to linearly scanning the whole file?

  a)  File sorted on x, n = 460
  b)  Unsorted file, n = 110
  c)  Unsorted file, n = 100
  d)  File sorted on y, n = 600

Answer submitted:  a)

You have answered the question correctly.

The following is another puzzle for me. You see, the "gamma" operation is "gamma_{a, count(b)}". I don't understand what kind of SQL statement it corresponds to: select a, count(b) from sometable group by a?
The reason I ask is that in a group by on a, we don't care about the value of b. We only need to store the value of a and an integer recording the number of aggregated tuples; the field b itself is not necessary. But according to my calculation, the author requires you to include a, b, and an integer for each aggregate tuple, which is meaningless.
If we interpret the SQL as "select a, b, count(b) from sometable group by a, b", it is still meaningless to state "count(b)", right?
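My reading of gamma_{a, COUNT(b)} as a one-pass hash aggregation, sketched below (the table contents are made up; only the grouping value a and a running counter are kept per group, never the b values themselves):

```python
# Sketch: gamma_{a, COUNT(b)} over a list of (a, b) tuples.
from collections import defaultdict

sometable = [(1, "x"), (1, "y"), (2, "x")]   # made-up rows (a, b)

counts = defaultdict(int)
for a, b in sometable:
    if b is not None:        # COUNT(b) skips NULLs, as in SQL
        counts[a] += 1

print(dict(counts))          # {1: 2, 2: 1}
```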
Can you explain this?
4. In this question, assume blocks can hold 1000 bytes of data, and there are 201 main-memory buffers available to perform an operation on relation R(a,b). Tuples of R require 16 bytes for a and 36 bytes for b, and no other space. Also, assume that integers require 4 bytes, no matter how large or small they are. If r is the number of tuples of relation R, what are the upper limits on r for executing the following operations in one pass and in two passes?


  1. DELTA (duplicate elimination).
  2. GAMMA_{a,COUNT(b)}.
  3. GAMMA_{b,COUNT(a)}.

Indicate which of the following is closest to the truth. (In measuring closeness, minimize percentage error, not the absolute difference between the choice and the exact answer.)


  a)  A two-pass GAMMA_{b,COUNT(a)} requires r <= 630,000.
  b)  A one-pass DELTA requires r <= 3800.
  c)  A two-pass DELTA requires r <= 550,000.
  d)  A two-pass GAMMA_{a,COUNT(b)} requires r <= 4,300,000.

Answer submitted:  b)

You have answered the question correctly.
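The bound in choice b) can be reproduced from the numbers in the question (a sketch; I assume one of the 201 buffers is reserved for reading input, leaving 200 for the in-memory table of distinct tuples):

```python
block_bytes = 1000
tuple_bytes = 16 + 36          # a + b = 52 bytes per tuple of R
tuples_per_block = block_bytes // tuple_bytes   # 19 tuples fit in a block

buffers = 201
working_buffers = buffers - 1  # one buffer kept for input (assumption)

# One-pass duplicate elimination must hold every distinct tuple in memory:
r_max = working_buffers * tuples_per_block
print(r_max)                   # 3800
```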
Nick Huang/Qingzhe Huang
我的征尘是星辰大海。。。(Chinese)
http://www.staroceans.com/
The dirt and dust from my pilgrimage become oceans of stars...(English)
http://www.staroceans.com/english.htm
		something I learned from comp451
		(The following is purely for personal reminder)
useful commands in Linux:

There is a built-in superuser named root, and the default password is also
root.
switch to superuser: su
type exit to "exit" superuser mode
mount floppy: mount /mnt/floppy
check disk usage: df
remove a non-empty directory: rm -r -f <dirname>

How to mount cdrom?

Answer: [oracle@fridge oracle]$ mount /mnt/cdrom



How to unmount cdrom?

Answer: [oracle@fridge oracle]$ umount /mnt/cdrom



How to mount a USB key drive?

Answer: [oracle@fridge oracle]$ mount /mnt/usb



useful commands in Oracle SQL:

1. simple
set timi on
spool statistics.txt
select systimestamp from dual;

--the following is a small statistic on the repeated orderkeys in table
--lineitem: I try to find out how many lineitems each orderkey has
select count(*) as repeated, countnumber from
(select l_orderkey as orderkey, count(*) as countnumber from lineitem group 
by l_orderkey)
group by countnumber;
spool off

--the following is the running result, which displays, for each repetition count,
--how many orderkeys occur that many times (how to express this in English? I am talking gibberish.)
SQL> @statistics

SYSTIMESTAMP
---------------------------------------------------------------------------
26-NOV-04 11.16.10.697533 AM -05:00

Elapsed: 00:00:00.09

  REPEATED COUNTNUMBER
---------- -----------
    214216           1
    214588           2
    214338           3
    213688           4
    214220           5
    214372           6
    214578           7

7 rows selected.

Elapsed: 00:01:05.55
SQL>



2. create cluster
create cluster ord_line_hash_cluster
(orderkey integer)
hashkeys 2000000
size 1024;

create table d_orders (
  o_orderkey integer not null primary key,
  o_custkey integer not null,
  o_orderstatus char(1) not null,
  o_totalprice decimal(15,2) not null,
  o_orderdate date not null,
  o_orderpriority char(15) not null,
  o_clerk char(15) not null,
  o_shippriority integer not null,
  o_comment varchar(79) not null)
cluster ord_line_hash_cluster(o_orderkey);

create table b_lineitem (
  l_orderkey integer not null,
  l_partkey integer not null,
  l_suppkey integer not null,
  l_linenumber integer not null,
  l_quantity decimal(15,2) not null,
  l_extendedprice decimal (15,2) not null,
  l_discount decimal(15,2) not null,
  l_tax decimal(15,2) not null,
  l_returnflag char(1) not null,
  l_linestatus char(1) not null,
  l_shipdate date not null,
  l_commitdate date not null,
  l_receiptdate date not null,
  l_shipinstruct char(25) not null,
  l_shipmode char(10) not null,
  l_comment varchar(44) not null)
cluster ord_line_hash_cluster(l_orderkey);

alter table b_lineitem
  add primary key (l_orderkey, l_linenumber);
alter table d_orders
  add primary key (o_orderkey);
alter table d_orders
  add foreign key (o_custkey) references customer;
alter table b_lineitem
  add foreign key (l_orderkey) references d_orders;
alter table b_lineitem
  add foreign key (l_partkey) references part;
alter table b_lineitem
  add foreign key (l_suppkey) references supplier;
alter table b_lineitem
  add foreign key (l_partkey, l_suppkey) references partsupp;

begin

            for x in ( select * from orders )

            loop

                  insert into d_orders

                  values ( x.o_orderkey, x.o_custkey, x.o_orderstatus, 
x.o_totalprice, x.o_orderdate, x.o_orderpriority, x.o_clerk, 
x.o_shippriority, x.o_comment);

                  insert into b_lineitem

                  select *

                    from lineitem

                   where l_orderkey = x.o_orderkey;

            end loop;
end;
/

rename lineitem to c_lineitem;
rename orders to c_orders;
rename b_lineitem to lineitem;
rename d_orders to orders;

3. create index
create index PARTKEY_INDEX on partsupp(ps_partkey);
create bitmap index part_size_bitmap on Part(p_size);
4. analyze index and table
analyze index part_size_bitmap compute statistics;
analyze table p3_nation compute statistics;
5. get query plan
How to trace the query plan for each query?

Answer:First of all, set the environments for tracing query plans:

Change directory to /mnt/dtp/oracle/product/9.2.0/rdbms/admin,

1. log into sqlplus as SYSTEM

2. in sqlplus, run @utlxplan

3. in sqlplus, run CREATE PUBLIC SYNONYM PLAN_TABLE FOR PLAN_TABLE;

4. in sqlplus, run GRANT ALL ON PLAN_TABLE TO PUBLIC;

next step, change directory to /mnt/dtp/oracle/product/9.2.0/sqlplus/admin,

1. sqlplus /nolog

2. in sqlplus, run CONNECT / AS SYSDBA

3. in sqlplus, run @plustrce

4. in sqlplus, run GRANT PLUSTRACE TO PUBLIC;

Now the environment has been set.

To trace the query plan for each query, you can run sqlplus, then run

             SET AUTOTRACE TRACEONLY EXPLAIN

       before you run a SQL statement


@yourscriptofsql.sql

6.  How to get the size of a table, an index or a cluster?

       Answer: Do the following steps:

               1.

                  in sqlplus, create a working table named SPACE by running

                                CREATE TABLE SPACE (
s_segname varchar(40) not null,
s_type char(10) not null,
s_free_blocks integer not null,
s_total_blocks integer not null,
s_total_bytes integer not null);

   2. in sqlplus, run showspace.sql. Then execute the procedure with some 
parameters.

                      Example 1: to get the size of table emp, run EXEC 
SHOW_SPACE('EMP', user, 'TABLE');
                                         Note that 'EMP' and 'TABLE' must be
in capitals.
                                         Then run SELECT * FROM SPACE;
                                         The last line shows the space 
information of table emp.

                      Example 2: to get the size of index dname_idx, run 
EXEC SHOW_SPACE('DNAME_IDX', user, 'INDEX');
                                         Note that 'DNAME_IDX' and 'INDEX'
must be in capitals.
                                         Then run SELECT * FROM SPACE;
                                         The last line shows the space 
information of index dname_idx.

                      Example 3: to get the size of cluster 
dept_hash_cluster, run EXEC SHOW_SPACE('DEPT_HASH_CLUSTER', user, 
'CLUSTER');
                                         Note that 'DEPT_HASH_CLUSTER' and
'CLUSTER' must be in capitals.
                                         Then run SELECT * FROM SPACE;
                                         The last line shows the space 
information of cluster dept_hash_cluster.

7. How to load data into Oracle?
Using Oracle SQL*Loader



What is SQL*Loader?

SQL*Loader is Oracle’s utility program for loading data into an Oracle 
table.

Most often, SQL*Loader takes two input files, a control file and a data
file, and loads the data into a single Oracle table.  The data file
contains data, and the control file contains information about the data --
where to load it, what to do if something goes wrong, etc.

SQL*Loader has lots and lots of options which can be used to handle various 
types of data and levels of complexity.  SQL*Loader is fully described in 
the Oracle Server Utilities User’s Guide.  This document is just about 
getting started.  SQL*Loader runs on Unix, mainframes, and PC’s.  This 
document is just about running it from a Windows PC.


Why Use SQL*Loader?

If you need to transfer quite a lot of data from your machine to an Oracle 
database table, you might want to use SQL*Loader.   If you already have the 
data in some other format, it may be worthwhile to use SQL*Loader.  If you 
need to transfer local data to a remote database on some recurring basis, it 
may be preferable to use SQL*Loader rather than something like FTP.  At the 
end of this document, there is a brief comparison of FTP versus SQL*Loader.


Getting Started, an Example

Say, for example, that you have the order.tbl file generated by dbgen. The 
schema of the table orders is:

    create table orders (

          o_orderkey integer not null,

          o_custkey integer not null,

          o_orderstatus char(1) not null,

          o_totalprice decimal(15,2) not null,

          o_orderdate date not null,

          o_orderpriority char(15) not null,

          o_clerk char(15) not null,

          o_shippriority integer not null,

          o_comment varchar(79) not null

        );


1) Create your control file.
    Using any text editor, create a file (say, orders.ctl) containing these 
lines:

        LOAD DATA

        INFILE 'orders.tbl'

        REPLACE

        INTO TABLE orders

        FIELDS TERMINATED BY '|'

        TRAILING NULLCOLS

        (

                           O_ORDERKEY,

                           O_CUSTKEY,

                           O_ORDERSTATUS,

                           O_TOTALPRICE,

                           O_ORDERDATE DATE(10) "YYYY-MM-DD",

                           O_ORDERPRIORITY,

                           O_CLERK,

                           O_SHIPPRIORITY,

                           O_COMMENT)



    The REPLACE keyword says, "remove any existing rows before starting the 
load."  There's also an INSERT [into empty table] and APPEND [to existing 
rows] option.

    O_ORDERKEY, O_CUSTKEY, O_ORDERSTATUS, O_TOTALPRICE, O_ORDERDATE, 
O_ORDERPRIORITY, O_CLERK, O_SHIPPRIORITY, O_COMMENT are the actual column 
names defined in the orders table.

    For O_ORDERDATE, since it is a date type, it’s necessary to further 
describe it as DATE(10) "YYYY-MM-DD".

    Notice there could be some missing data in the data file. The TRAILING 
NULLCOLS statement handles the missing data; it tells SQL*Loader to load any 
missing data as NULL values.

2) Run SQL*Loader.
At the prompt, execute SQL*Loader as follows:
     [oracle@fridge oracle]$sqlldr scott/tiger control=orders.ctl

When the load completes, look in the file orders.log.  This log file will 
contain information about how many rows were loaded, how many rows -- if any 
-- were NOT loaded, and other information that may be useful to reassure or 
debug.

8. How to drop a cluster?
       Answer: A cluster index can be dropped without affecting the cluster 
or its clustered tables. However, clustered tables cannot be used if there 
is no cluster index; you must re-create the cluster index to allow access to 
the cluster. Cluster indexes are sometimes dropped as part of the procedure 
to rebuild a fragmented cluster index. To drop a cluster that contains no 
tables, and its cluster index, use the SQL command DROP CLUSTER. For 
example, suppose there is a cluster EMP_DEPT for tables emp and dept; the
following statement drops the empty cluster named EMP_DEPT:

DROP CLUSTER emp_dept;

          If the cluster contains one or more clustered tables and you 
intend to drop the tables as well, add the INCLUDING TABLES option of the 
DROP CLUSTER command, as follows:

DROP CLUSTER emp_dept INCLUDING TABLES;

        If the INCLUDING TABLES option is not included and the cluster 
contains tables, an error is returned. If one or more tables in a cluster 
contain primary or unique keys that are referenced by FOREIGN KEY 
constraints of tables outside the cluster, the cluster cannot be dropped 
unless the dependent FOREIGN KEY constraints are also dropped. This can be 
easily done using the CASCADE CONSTRAINTS option of the DROP CLUSTER 
command, as shown in the following example:

DROP CLUSTER emp_dept INCLUDING TABLES CASCADE CONSTRAINTS;

        Oracle returns an error if you do not use the CASCADE CONSTRAINTS 
option and constraints exist.

9. a refresh function
--drop table badlog;

--create table badlog( errm varchar2(4000),
--                     data varchar2(4000));

create or replace
function convert_month(p_mon     in varchar2)
return varchar2
is
begin
    case p_mon
      when '01' then return 'JAN';
      when '02' then return 'FEB';
      when '03' then return 'MAR';
      when '04' then return 'APR';
      when '05' then return 'MAY';
      when '06' then return 'JUN';
      when '07' then return 'JUL';
      when '08' then return 'AUG';
      when '09' then return 'SEP';
      when '10' then return 'OCT';
      when '11' then return 'NOV';
      when '12' then return 'DEC';
    end case;
end convert_month;
/

create or replace
function  load_data( p_dir       in varchar2,
     p_orders    in varchar2,
                     p_corders    in varchar2,
                     p_ordersname  in varchar2,
                     p_lineitem     in varchar2,
                     p_clineitem    in varchar2,
                     p_lineitemname in varchar2,
                   p_delimiter in varchar2 default '|'
)
return number
is
    l_orders        utl_file.file_type;
    l_theCursor1    integer default dbms_sql.open_cursor;
    l_buffer1       varchar2(4000);
    l_lineitem      utl_file.file_type;
    l_theCursor2    integer default dbms_sql.open_cursor;
    l_buffer2       varchar2(4000);
    l_colCnt2       number default 0;
    l_status        integer;
    l_colCnt        number default 0;
    l_lastLine      varchar2(4000);
    l_cnt           number default 0;
    l_sep           char(1) default NULL;
    l_errmsg        varchar2(4000);
    l_date          varchar2(12);
    l_random        integer;
    l_flag          boolean;
    l_orderkey1     varchar2(10);
    l_orderkey2     varchar2(10);

begin
        /*
         * This will be the file we read the data from.
         * We are expecting simple delimited data.
         */
    l_orders:= utl_file.fopen( p_dir, p_ordersname, 'r', 4000 ); /* here we 
open the orders.tbl.u1*/
    l_lineitem := utl_file.fopen( p_dir, p_lineitemname, 'r', 4000 );

    l_buffer1 := 'insert into ' || p_orders ||
                '(' || p_corders || ') values ( ';
        /*
         * This counts commas by taking the current length
         * of the list of column names, subtracting the
         * length of the same string with commas removed, and
         * adding 1.
         */
    l_buffer2 := 'insert into ' || p_lineitem ||
                '(' || p_clineitem || ') values ( ';

    l_colCnt := length(p_corders)-
                  length(replace(p_corders,',',''))+1;
    l_colCnt2 := length(p_clineitem)-
                  length(replace(p_clineitem,',',''))+1;

    for i in 1 .. l_colCnt
    loop
        l_buffer1 := l_buffer1 || l_sep || ':b'||i;
        l_sep    := ',';
    end loop;
    l_buffer1 := l_buffer1 || ')';

    l_sep := NULL;
    for i in 1 .. l_colCnt2
    loop
        l_buffer2 := l_buffer2 || l_sep || ':b'||i;
        l_sep    := ',';
    end loop;
    l_buffer2 := l_buffer2 || ')';


        /*
         * We now have a string that looks like:
         * insert into T ( c1,c2,... ) values ( :b1, :b2, ... )
         */
    dbms_sql.parse(  l_theCursor1, l_buffer1, dbms_sql.native );

    dbms_sql.parse(  l_theCursor2, l_buffer2, dbms_sql.native );

    l_flag := FALSE;
    loop
       /*
        * Read data and exit when there is no more.
        */
        begin
            utl_file.get_line( l_orders, l_lastLine );
        exception
            when NO_DATA_FOUND then
                exit;
        end;
        /*
         * It makes it easy to parse when the line ends
         * with a delimiter.
         */
        l_buffer1 := l_lastLine || p_delimiter;


        for i in 1 .. l_colCnt
        loop
          if (i = 5) then
            l_date := substr( l_buffer1, 1, instr(l_buffer1,p_delimiter)-1);
            l_date := substr(l_date, 9, 
2)||'-'||convert_month(substr(l_date,6, 2))||'-'||substr(l_date, 1, 4);
           dbms_sql.bind_variable( l_theCursor1, ':b'||i,  l_date);
         else
            if(i = 1) then
               l_orderkey1 := substr( l_buffer1, 1,
                                instr(l_buffer1,p_delimiter)-1);
            end if;
            dbms_sql.bind_variable( l_theCursor1, ':b'||i,
                            substr( l_buffer1, 1,
                                instr(l_buffer1,p_delimiter)-1 ) );
         end if;
            l_buffer1 := substr( l_buffer1,
                          instr(l_buffer1,p_delimiter)+1 );
        end loop;

        /*
         * Execute the insert statement. In the event of an error
         * put it into the "bad" file.
         */
        begin
            l_status := dbms_sql.execute(l_theCursor1);
            l_cnt := l_cnt + 1;
        exception
            when others then
                l_errmsg := sqlerrm;
--                insert into badlog ( errm, data )
--                values ( l_errmsg, l_lastLine );
        end;

/*      l_random := dbms_random.value(1,7);
        for i in 1 .. l_random
*/      loop
           if (l_flag = FALSE) then
       /*
        * Read data and exit when there is no more.
        */

                begin
                    utl_file.get_line( l_lineitem, l_lastLine );
                exception
                    when NO_DATA_FOUND then
                        exit;
                end;
        /*
         * It makes it easy to parse when the line ends
        * with a delimiter.
         */

                l_buffer2 := l_lastLine || p_delimiter;

                for i in 1 .. l_colCnt2
                loop
                  if (i = 11 or i = 12 or i = 13) then
                    l_date := substr( l_buffer2, 1, 
instr(l_buffer2,p_delimiter)-1);
                    l_date := substr(l_date, 9, 
2)||'-'||convert_month(substr(l_date,6, 2))||'-'||substr(l_date, 1, 4);
                    dbms_sql.bind_variable( l_theCursor2, ':b'||i,  l_date);
                 else
                    if(i = 1) then
                        l_orderkey2 := substr( l_buffer2, 1,
                                instr(l_buffer2,p_delimiter)-1);
                    end if;
                    dbms_sql.bind_variable( l_theCursor2, ':b'||i,
                                    substr( l_buffer2, 1,
                                        instr(l_buffer2,p_delimiter)-1 ) );
                 end if;
                    l_buffer2 := substr( l_buffer2,
                                  instr(l_buffer2,p_delimiter)+1 );
                end loop;

                if l_orderkey1 != l_orderkey2 then
                    l_flag := TRUE;
                    exit;
                end if;
           end if;


           /*
            * Execute the insert statement. In the event of an error
            * put it into the "bad" file.
            */
           begin
                l_flag := FALSE;
                l_status := dbms_sql.execute(l_theCursor2);
                l_cnt := l_cnt + 1;
           exception
                when others then
                    l_errmsg := sqlerrm;
  --                  insert into badlog ( errm, data )
  --                  values ( l_errmsg, l_lastLine );
           end;

        end loop;

    end loop;

    /*
     * close up and commit
     */
    dbms_sql.close_cursor(l_theCursor1);
    dbms_sql.close_cursor(l_theCursor2);
    utl_file.fclose( l_orders);
    utl_file.fclose( l_lineitem );
    commit;

    return l_cnt;
exception
when others then
    dbms_sql.close_cursor(l_theCursor1);
if ( utl_file.is_open( l_orders ) ) then
utl_file.fclose(l_orders );
end if;
        dbms_sql.close_cursor(l_theCursor2);
                if ( utl_file.is_open( l_lineitem ) ) then
                        utl_file.fclose(l_lineitem);
                end if;
RAISE;
end load_data;
/

begin
   dbms_output.put_line(
       load_data(
/* NOTE: modify the parameters here after you modify the function*/
                  'UTL_FILE_DIR',
                  'ORDERS',
                  
'O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT',
                  'orders.tbl.u2',
                  'LINEITEM',
                  
'L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT',
                  'lineitem.tbl.u2',
                  '|' ) || ' rows loaded' );
end;
/

--select * from badlog;

10. generating dataset and data
Generate datasets

1.      Go to http://phlox.encs.concordia.ca/dtp/software, download 
20000511.zip, save it in directory /mnt/dtp. Uncompress it by running 
command

[oracle@fridge dtp]$ unzip 20000511.zip

In directory appendix/dbgen, it contains all source files for dbgen and 
qgen.



2.      Go to directory appendix/dbgen, change the mode of the files in this 
directory to 700.

[oracle@fridge dbgen]$ chmod 700 *



3.      Before compiling the source code, create the makefile from 
makefile.suite first:

[oracle@fridge dbgen]$ cp makefile.suite makefile

modify makefile by replacing line 5 with

              CC = gcc

and replacing line 11-13 with:

DATABASE= SQLSERVER

MACHINE = SUN

WORKLOAD = TPCH



4.      Run "make"; you will get dbgen and qgen.

5.      Read README to see how to use dbgen and qgen.
Generating queries

1.      copy the file dists.dss and dbgen in appendix/dbgen to 
appendix/queries

       [oracle@fridge dbgen]$ cp dists.dss ../queries

       [oracle@fridge dbgen]$ cp qgen ../queries



2.      Go to directory appendix/queries, in this directory, there are 22 
query templates.

       Open 7.sql, delete "as shipping" in line 37.

       Open 8.sql, delete "as all_nations" in line 39.

       Open 9.sql, delete "as profit" in line 32.

       Open 13.sql, append " as c_count" to the end of line 14, delete "as 
c_orders(c_custkey, c_count)" in line 21.

       Open 22.sql, modify "substring(c_phone from 1 for 2)" in lines 14, 
19, and 28 as "substr(c_phone, 1, 2)". Delete "as castsale" in line 39.



3.      Run qgen. Note that you have to run qgen for 13.sql independently 
due to a bug in qgen.



4.      At the end of each generated query, there are two lines, "set rowcount
XX" and "go", which are not recognized by sqlplus, so you have to delete them.


11. Suppose you create constraints and forget to name them; how can you remove them?
You can look up the system-generated name of the constraint:
select constraint_name, constraint_type from user_constraints
	where table_name='CAPITAL_TABLE_NAME_OF_YOUR_TABLE';
12. To drop cluster:
	drop cluster cluster_name including tables;
13. Check all tables:
	select * from tab;

 Re: Is this a bug in timestamp protocol

Hi Sir,
I have a question regarding the scheduler rules for timestamps on textbook page 974 (the book is "the complete book"; I don't know the page number in "the implementation one". Anyway, it is also in your slides "Rules for Time-stamp based scheduling").

In the read rule, it says:
1. Request is a read:
If TS(T) >= WT(X), the read is physically realizable.
ii) if c(X) is false, delay T until c(X) becomes true or the transaction that wrote X aborts;

Here I think this is wrong. We must treat two cases:
a) if TS(T) > WT(X), then proceed as in the book.
b) if TS(T) == WT(X), then NO MATTER WHETHER c(X) IS TRUE OR FALSE, THE READ SHOULD BE GRANTED.

The reason is simple: suppose there is only one transaction, and it writes first and reads later. It will be stuck forever, because when it writes it sets c(X) to false. Later, even though no other transaction is competing, the transaction still cannot proceed, because c(X) is false.
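The self-blocking scenario can be sketched as a tiny simulation (a sketch, simplified to a single element X; c stands for the commit bit, and the names are my own):

```python
# Sketch of the timestamp read rule for one element X.
# WT = write time of X, c = commit bit of X (False until the writer
# commits), TS = timestamp of the requesting transaction.

def try_read(TS, WT, c, allow_own_write=True):
    if TS < WT:
        return "abort"       # read not physically realizable
    if c:
        return "granted"
    if allow_own_write and TS == WT:
        return "granted"     # the reader IS the transaction that wrote X
    return "delayed"         # wait until the writer commits or aborts

# One transaction with TS = 5 writes X (WT = 5, c = False), then reads X:
print(try_read(5, 5, False, allow_own_write=False))  # 'delayed' -- stuck forever
print(try_read(5, 5, False, allow_own_write=True))   # 'granted' -- with case b)
```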
Am I right? Can you justify this for me?
Thank you very much,
Nick Huang/Qingzhe Huang
			Regarding the huge number of handles created by MySQL on Windows XP
Here my question starts:
I find it rather ridiculous. You see, "mysql" uses 56k handles, and with every query it uses several more. Does that mean there is a serious resource leak? I find that hard to believe.
Have you heard about this kind of joke in the Windows version of MySQL? I observed this from my database client program. Then I ran queries in its command line, and the handle count increased for every simple query like "select count(*) from ...".

I cannot believe it!

Any comment on this?

W sent me this link, which is very educational. In case the link becomes invalid, I copy the content here. (Somebody asked a similar question!)

RE: MySQL 5.0 creates 100,000 Window handles

Hi,

In fact, this is a good question, so I am going to try to answer.

> 1) What is MySQL using the handles for?
Handles are nothing and everything on Windows. Nothing, because they are only pointers to hidden internal structs. And everything, because handles are everywhere if you try to develop a Win32 app.
There are 3 kinds:
Users: Window, Cursor, Menu,...
GDI: all graphic objects such as Brush, Pen,...
Kernel: Access token (ACL), Console input, Event, File, Heap, Mutex, Pipe, Process, Semaphore, Socket, Thread, Timer, ...
A database server is therefore a great consumer of kernel Handles.

> 4) How does a MySQL server handle millions of queries on a large table?
> Will it hit an upper limit of Handles that it can allocate?
The per-process theoretical limit on kernel handles is 2^24. However, handles are stored in the paged pool (kernel reserved memory), so the actual number of handles you can create is based on available memory. So, the number of handles that you can create on 32-bit Windows is significantly lower than 2^24. For example, on Windows 2000 the max pool size is 300 MB (I don't know about the most recent Windows versions).
But be sure MySQL server can handle millions of queries on a large table.

> 2) Is there any way to stop MySQL from consuming so many Windows
> resources?
I investigated and can summarize with a simple test I made with a 5.0.37 compiled with all storage engines (and verified with a 4.1.21): MySQL allocates 43000 handles. I recompiled it without InnoDB and BDB, and MySQL now allocates 108 handles at startup. I'm not an in-depth InnoDB specialist, nor a BDB one, but I know they have a row-locking mechanism, unlike MyISAM, and I'm sure they are great consumers of mutexes.

> Windows of course runs slow with this many handles allocated.
Yes, though not because of the number of handles (logical resources) but because of physical resources, especially RAM, and perhaps by design in the OS kernel.


Regards,
Geoffroy
¡¡
I noticed that if my program executes a lot of Select statements, Windows XP will slow down when the program completes. I did some investigating, and mysqld-nt.exe has close to 100,000 handles created when my program ends (shown in Task Manager and SysInternals Process Explorer). As each Select statement is executed, 2 handles are created. These handles stay allocated until the MySQL server is stopped (stopping my program won't free up the handles). Windows of course runs slow with this many handles allocated.

1) What is MySQL using the handles for?
2) Is there any way to stop MySQL from consuming so many Windows =
resources?
3) Is it like this on Linux?
4) How does a MySQL server handle millions of queries on a large table? 
Will it hit an upper limit of Handles that it can allocate?

Note: it does not appear to allocate more handles if the query is found in the query cache.

Mike 

RE: MySQL 5.0 creates 100,000 Window handles

Geoffroy,
Thanks for the reply. Of course by now I figured out what's causing the large # of handles being allocated by Windoze XP. It's the query cache. Each query that gets added to the query cache uses 2 handles. If the query cache is large, say 150MB, as the cache fills up more and more handles are allocated. I was able to create a test program and generated hundreds of thousands of simple queries that returned a small # of rows from 1 large table (there were no cache hits because I wanted to fill the query cache up with as many unique queries as possible). I let it run overnight, and in the morning Windows had allocated over 600,000 handles! It would have been more, but the query cache was full by then. If I were to increase the query_cache_size then I'm sure I could get XP to allocate over 1 million handles. Flushing the query cache of course releases the handles.
I'm not sure if the same thing happens on Linux or not. Does it? Should I be worried running MySQL on an XP box that has to run 24/7?

Mike

            This is my little database work
A) Views
----------------------------------------------------------------------------------------------------

1. view of enactivate_view:


i) enactivate_view_start_middle 
use datawind
if exists (select name from sysobjects
   where name='enactivate_view_start_middle' and type='V')
   drop view enactivate_view_start_middle

go 
use datawind
go

create view enactivate_view_start_middle (sim, enactivate_date) as 
select sim, max(tariff_end) as 'enactivate_date' 
from orangebills 
where tariff='DW_inactive' group by sim --order by sim,last_end
----------------------------------------------------------------------------------------------------
ii) enactivate_view_since_beginning 
use datawind
if exists (select name from sysobjects
   where name='enactivate_view_since_beginning' and type='V')
   drop view enactivate_view_since_beginning 

go 
use datawind
go


create view enactivate_view_since_beginning (sim, enactivate_date) as
select a.sim, min(a.tariff_start) as enactivate_date 
from orangebills a 
where a.sim not in (select b.sim from enactivate_view_start_middle b)
and 
tariff='DW_active'
group by 
a.sim
----------------------------------------------------------------------------------------------------
iii) enactivate_view 

use datawind
if exists (select name from sysobjects
   where name='enactivate_view' and type='V')
   drop view enactivate_view  

go 
use datawind
go



create view enactivate_view (sim, enactivate_date) as

select a.sim, a.enactivate_date from enactivate_view_start_middle a
union
select b.sim, b.enactivate_date from enactivate_view_since_beginning b
----------------------------------------------------------------------------------------------------
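The three views above reduce to a simple per-SIM rule: if a SIM ever carried the 'DW_inactive' tariff, its activation date is the latest end of such a period; otherwise it is the earliest start of a 'DW_active' period. A minimal Python restatement (the sample rows are invented for illustration):

```python
# Hypothetical sample rows: (sim, tariff, tariff_start, tariff_end).
rows = [
    ("A", "DW_inactive", "2008-01-01", "2008-03-01"),
    ("A", "DW_active",   "2008-03-01", "2008-04-01"),
    ("B", "DW_active",   "2008-02-01", "2008-03-01"),
]

def enactivate_date(sim):
    # enactivate_view_start_middle: latest end of a DW_inactive period.
    inactive_ends = [end for s, t, start, end in rows
                     if s == sim and t == "DW_inactive"]
    if inactive_ends:
        return max(inactive_ends)
    # enactivate_view_since_beginning: earliest DW_active start otherwise.
    active_starts = [start for s, t, start, end in rows
                     if s == sim and t == "DW_active"]
    return min(active_starts)

print(enactivate_date("A"))  # 2008-03-01 (re-activated after an inactive spell)
print(enactivate_date("B"))  # 2008-02-01 (active since the beginning)
```

ISO date strings compare correctly as text, which keeps the sketch short; the real views of course compare datetime columns.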
2. enactivate_one_year_view 

use datawind
if exists (select name from sysobjects
   where name='enactivate_one_year_view' and type='V')
   drop view enactivate_one_year_view

go 
use datawind
go
create view enactivate_one_year_view (sim, enactivate_date) as
select * from enactivate_view a where datediff(day, a.enactivate_date, getdate())>365
go
----------------------------------------------------------------------------------------------------

3. last_three_month_no_usage_view

i) last_three_month_have_usage_view

use datawind
if exists (select name from sysobjects
   where name='last_three_month_have_usage_view' and type='V')
   drop view last_three_month_have_usage_view

go 
use datawind
go


create view last_three_month_have_usage_view(sim, enactivate_date) as

select distinct a.sim, a.enactivate_date
from enactivate_one_year_view a, orangebills b
where 
  a.sim=b.sim 

 and
 
  datediff(month, b.tariff_end, getDate())<3  

and 
(
  b.total_MB   <> 0.0 
or 
  b.roaming_MB <> 0.0
)
go
----------------------------------------------------------------------------------------------------
ii)last_three_month_no_usage_view

go 
use datawind
if exists (select name from sysobjects 
    where name='last_three_month_no_usage_view' and type='V')
    drop view last_three_month_no_usage_view

go 
use datawind
go

create view last_three_month_no_usage_view(sim, enactivate_date) as

select sim, enactivate_date from enactivate_one_year_view
where sim
not in
(select sim from last_three_month_have_usage_view) 

go

----------------------------------------------------------------------------------------------------
4. enactivate_eleven_month_view 
use datawind
if exists (select name from sysobjects
   where name='enactivate_eleven_month_view' and type='V')
   drop view enactivate_eleven_month_view

go 
use datawind
go


create view enactivate_eleven_month_view (sim, enactivate_date) as
select *
from enactivate_view  
where 
	datediff(month, enactivate_date, getdate())>=11
	and sim in
	(
		select distinct sim
		from 				
			orangebills 			
		where 			  
		 
		  datediff(month, tariff_end, getdate())<3  
		
		and 
		(
		  total_MB   <> 0.0 
		or 
		  roaming_MB <> 0.0
		)
	)

----------------------------------------------------------------------------------------------------
B) Stored Procedures
1) enactivate_eleven_month_have_usage_proc


use datawind
if exists (select name from sysobjects
   where name='enactivate_eleven_month_have_usage_proc' and type='P')
   drop procedure enactivate_eleven_month_have_usage_proc

go 
use datawind
go


CREATE PROCEDURE enactivate_eleven_month_have_usage_proc
@startdate datetime,
@enddate   datetime,
@number    integer output

as


select @number=count(sim)
from enactivate_view  
where 
	datediff(day, enactivate_date, @enddate)>=(365-30)
	and sim in
	(
		select distinct sim
		from 				
			orangebills 			
		where 			  
		 
		  datediff(month, tariff_end, @enddate)<3  
		
		and 
		(
		  total_MB   <> 0.0 
		or 
		  roaming_MB <> 0.0
		)
	)
        and sim not in

		

		(
	 		select sim
			from enactivate_view  
			where 
				datediff(day, enactivate_date, @startdate)>=(365-30)
				and sim in
				(
					select distinct sim
					from 				
						orangebills 			
					where 			  
					 
					  datediff(month, tariff_end, @startdate)<3  
					
					and 
					(
					  total_MB   <> 0.0 
					or 
					  roaming_MB <> 0.0
					)
				)		
		
		)



--this is using simple number subtraction

/*
declare    @@start integer
declare    @@end   integer



select @@start=count(sim)
from enactivate_view  
where 
	datediff(month, enactivate_date, @startdate)>=11
	and sim in
	(
		select distinct sim
		from 				
			orangebills 			
		where 			  
		 
		  datediff(month, tariff_end, @startdate)<3  
		
		and 
		(
		  total_MB   <> 0.0 
		or 
		  roaming_MB <> 0.0
		)
	)


select @@end=count(sim)
from enactivate_view  
where 
	datediff(month, enactivate_date, @enddate)>=11
	and sim in
	(
		select distinct sim
		from 				
			orangebills 			
		where 			  
		 
		  datediff(month, tariff_end, @enddate)<3  
		
		and 
		(
		  total_MB   <> 0.0 
		or 
		  roaming_MB <> 0.0
		)
	)

set @number = @@end-@@start
*/
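Note that the live query (SIMs qualifying at @enddate but NOT IN the set qualifying at @startdate) and the commented-out count subtraction agree only when every SIM that qualified at the start still qualifies at the end. A toy example with invented sets:

```python
# Hypothetical sets of SIMs that satisfy the criteria at the two dates.
at_start = {"A", "B"}        # qualified at @startdate
at_end = {"B", "C", "D"}     # qualified at @enddate ("A" dropped out)

# The NOT IN approach counts SIMs newly qualifying by @enddate.
newly_qualifying = len(at_end - at_start)

# The commented-out variant subtracts the two counts.
count_difference = len(at_end) - len(at_start)

print(newly_qualifying)   # 2 ("C" and "D")
print(count_difference)   # 1, off by the SIM that dropped out
```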


2) last_three_month_no_usage_count_proc

use datawind
if exists (select name from sysobjects
   where name='last_three_month_no_usage_count_proc' and type='P')
   drop procedure last_three_month_no_usage_count_proc

go 
use datawind
go


CREATE PROCEDURE last_three_month_no_usage_count_proc
@currentDate datetime,
@number       integer  OUTPUT
as

select @number = count(sim)
from 	
	enactivate_view  
where 
        datediff(day, enactivate_date,  @currentDate)>365	
and 
	( sim  not in
		--have usage in last three month
		(
			select distinct sim
			from 				
				orangebills 			
			where 			  
			 
			  datediff(month, tariff_end, @currentDate)<3  
			
			and 
			(
			  total_MB   <> 0.0 
			or 
			  roaming_MB <> 0.0
			)
		)
	)
--order by enactivate_date
go


3) last_three_month_no_usage_interval_proc

use datawind
if exists (select name from sysobjects
   where name='last_three_month_no_usage_interval_proc' and type='P')
   drop procedure last_three_month_no_usage_interval_proc

go 
use datawind
go


CREATE PROCEDURE last_three_month_no_usage_interval_proc
@startdate datetime,
@enddate   datetime,
@number    integer output
as





select  @number = count(sim)
from 	
	enactivate_view  
where 
        datediff(day, enactivate_date,  @enddate)>365	
and 
	( sim  not in		
		(
			select distinct sim
			from 				
				orangebills 			
			where 			  
			 
			  datediff(month, tariff_end, @enddate)<3  
			
			and 
			(
			  total_MB   <> 0.0 
			or 
			  roaming_MB <> 0.0
			)
		)
	)
and     sim not in
		(
			select  sim
			from 	
				enactivate_view  
			where 
			        datediff(day, enactivate_date,  @startdate)>365	
			and 
				( sim  not in
					--have usage in last three month
					(
						select distinct sim
						from 				
							orangebills 			
						where 			  
						 
						  datediff(month, tariff_end, @startdate)<3  
						
						and 
						(
						  total_MB   <> 0.0 
						or 
						  roaming_MB <> 0.0
						)
					)
				)

		)
	







--order by enactivate_date


/*
declare @@start integer
declare @@end   integer

exec last_three_month_no_usage_count_proc @startdate, @@start output
exec last_three_month_no_usage_count_proc @enddate, @@end  output
set @number = @@end -@@start
*/
go


4) last_three_month_no_usage_proc

use datawind
if exists (select name from sysobjects
   where name='last_three_month_no_usage_proc' and type='P')
   drop procedure last_three_month_no_usage_proc

go 
use datawind
go


CREATE PROCEDURE last_three_month_no_usage_proc
@currentDate datetime
as

select @currentdate as currentdate, count(sim) ToBeTerminated
from 	
	enactivate_view  
where 
        datediff(day, enactivate_date,  @currentDate)>365	
and 
	( sim  not in
		--have usage in last three month
		(
			select distinct sim
			from 				
				orangebills 			
			where 			  
			 
			  datediff(month, tariff_end, @currentDate)<3  
			
			and 
			(
			  total_MB   <> 0.0 
			or 
			  roaming_MB <> 0.0
			)
		)
	)
--order by enactivate_date
go



5. report
i) To Terminate
select * from last_three_month_no_usage_view order by enactivate_date

ii) To Promote
select * from enactivate_eleven_month_view order by enactivate_date

iii) To Report
select getDate() as CurrentDate,
(select count(a.sim) from last_three_month_no_usage_view a) as To_Terminate,
(select count(b.sim) from enactivate_eleven_month_view b) as To_Promote

iv)



declare @startdate datetime
declare @enddate datetime
declare @terminate  integer
declare @promote  integer
set @startdate =getdate()
set @enddate=dateadd(week,1, @startdate)


/*
while ( @startdate< cast('2008-12-31' as smalldatetime))
begin
--exec last_three_month_no_usage_interval_proc @startdate,@enddate,@terminate output
exec enactivate_eleven_month_have_usage_proc @startdate,@enddate,@promote output

select convert(varchar(20), @startdate, 11) as startdate,convert(varchar(20), 
	@enddate, 11) as enddate,@terminate as ToTerminate, @promote as ToPromote 
set @startdate = @enddate
set @enddate=dateadd(week,1, @startdate)
end
*/









set @startdate =cast('2008-10-29' as smalldatetime)

set @enddate=dateadd(week,1, @startdate)



select a.sim, a.enactivate_date, b.total_MB,b.roaming_MB,b.tariff_start,b.tariff_end, b.month_char
from enactivate_view a, orangebills b  
where 
	a.sim=b.sim
	and
	datediff(month, enactivate_date, @enddate)>=11
	and a.sim in
	(
		select distinct sim
		from 				
			orangebills 			
		where 			  
		 
		  datediff(month, tariff_end, @enddate)<3  
		
		and 
		(
		  total_MB   <> 0.0 
		or 
		  roaming_MB <> 0.0
		)
	)
        and a.sim not in

		

		(
	 		select sim
			from enactivate_view  
			where 
				datediff(month, enactivate_date, @startdate)>=11
				and sim in
				(
					select distinct sim
					from 				
						orangebills 			
					where 			  
					 
					  datediff(month, tariff_end, @startdate)<3  
					
					and 
					(
					  total_MB   <> 0.0 
					or 
					  roaming_MB <> 0.0
					)
				)		
		
		)




order by a.sim, b.tariff_start





/*

select @result = count(sim)
from 	
	enactivate_view  
where 
        datediff(day, enactivate_date,  @startDate)>365	
and 
	( sim  not in
		--have usage in last three month
		(
			select distinct sim
			from 				
				orangebills 			
			where 			  
			 
			  datediff(month, tariff_end, @startDate)<3  
			
			and 
			(
			  total_MB   <> 0.0 
			or 
			  roaming_MB <> 0.0
			)
		)
	)
--order by enactivate_date
*/



--exec last_three_month_no_usage_count_proc @startdate, @result output
--select @startdate, @result



6. Master view

create view master_view (sim, enactivate_date, month_char, total_charges,tariff_start, tariff_end, tariff, total_mb, 
roaming_GBP,line_rental,OB_GBP,roaming_MB,other_GBP) as

select a.sim as sim, a.enactivate_date as enact_date, 
b.month_char, b.total_charges, b.tariff_start, b.tariff_end, b.tariff, b.total_mb, 
b.roaming_GBP,b.line_rental,b.OB_GBP,b.roaming_MB,b.other_GBP
from enactivate_view_start_middle a, orangebills b
where a.sim=b.sim and  a.enactivate_date=b.tariff_end

union 
select a.sim as sim, a.enactivate_date as enact_date, 
b.month_char, b.total_charges, b.tariff_start, b.tariff_end, b.tariff, b.total_mb,
b.roaming_GBP,b.line_rental,b.OB_GBP,b.roaming_MB,b.other_GBP
from enactivate_view_since_beginning  a, orangebills b 
where (a.sim=b.sim and  a.enactivate_date=b.tariff_start)
	Common SQL Server Problems (digested from MSDN)
Not associated with a trusted sql server connection
There are two modes for authenticating user connections in SQL Server and MSDE:
SQL Server Authentication
Windows Authentication



The server can be configured to accept Windows Authentication only or Both authentication modes. If the server is configured to accept Windows Authentication only, clients connecting with a SQL Server account may get this error message. Also, if you are connecting by specifying Windows Authentication and your Windows account context does not have access to the server, you may get this error message.
Changing SQL Server and MSDE authentication mode
By default, the value of the LoginMode Windows registry subkey is set to 1 for Windows Authentication and 2 for Mixed Mode Authentication. To enable Mixed Mode authentication after installation, you must change this value to 2.



To change the value of LoginMode to 2, follow these steps: 

In Control Panel, open the Services tool to stop MSSQLSERVER and all other related services (such as SQLSERVERAgent) 
To open Registry Editor, click Start, click Run, type regedt32, and then click OK. 
Locate either of the following subkeys (depending on whether you installed SQL Server/MSDE as a default instance or as a named instance): 
Default Instance: 
HKEY_LOCAL_MACHINE\Software\Microsoft\MSSqlserver\MSSqlServer\

-or- 

Named Instance: 
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\Instance Name\MSSQLServer\ 

In the right pane, double-click the LoginMode subkey. 
In the DWORD Editor dialog box, set the value of this subkey to 2, make sure that the Hex option is selected, and then click OK. 
Restart the MSSQLSERVER and the SQLSERVERAgent services for this change to take effect. 
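The registry steps above can also be done from a command prompt with the built-in `reg` tool (a sketch for a default instance only; adjust the key path for a named instance, and run it as Administrator with the services stopped):

```shell
rem Set LoginMode to 2 (Mixed Mode) for a default SQL Server/MSDE instance.
reg add "HKEY_LOCAL_MACHINE\Software\Microsoft\MSSqlserver\MSSqlServer" /v LoginMode /t REG_DWORD /d 2 /f
```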

To Adjust the size of "tempdb"
1. Stop SQL Server. Open a command prompt, and then 
						start SQL Server by typing the following command:

sqlservr -c -f

The -c and -f parameters cause SQL Server to start in a minimum configuration mode with a tempdb size of 1 MB for the data file and 0.5 MB for the log file.

NOTE: If you use a SQL Server named instance, you must change to the appropriate folder (Program Files\Microsoft SQL Server\MSSQL$instance name\Binn) and use the -s switch (-s%instance_name%).
2. Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:
ALTER DATABASE tempdb MODIFY FILE
(NAME = 'tempdev', SIZE = target_size_in_MB)
--Desired target size for the data file

ALTER DATABASE tempdb MODIFY FILE
(NAME = 'templog', SIZE = target_size_in_MB)
--Desired target size for the log file

3. Stop SQL Server by pressing Ctrl-C at the command prompt window, restart SQL Server as a service, and then verify the size of the Tempdb.mdf and Templog.ldf files.
