When chaos begins...
Here it goes....
The code from P.P.
Here it comes....
your basic understanding of what yield (); does is correct
I can't debug your code over e-mail
hi professor,
Thank you for your mail!
Enclosed pls find my source file. I didn't change anything from your source
code except:
1. Initialize the Daemon's array to be {0, 4, 4, 4, 6, 4, 6, 4, 6};
2. Insert one display statement inside the Daemon's interrupt function:
if ( d[tid] > 4 )
{
System.out.println("worker "+tid+ " is interrupted.");
Thread.yield ();
}
3. All others are unchanged.
4. One more thing I observed: when a thread sends out a signal, it
only wakes up the waiting thread (in this case the main thread) on the next
round. That is, suppose thread 8 sends out a signal and terminates; the next
running thread is thread 7 instead of the signaled thread----the main thread.
5. According to your lecture, I guess that when a process is doing I/O,
the CPU scheduler should place it into a waiting queue. So I think that,
similarly, when a thread is doing output of messages, which is very slow,
it should also relinquish the CPU, and the next thread at the head of the ready
queue should run. But it doesn't seem so from my result. Is it because Java
threads are user threads instead of kernel threads?
Thank you for your time,
Nick Huang/Qingzhe Huang
Here it comes...
I think you are mixing tons of different things
for example, these aren't Unix signals; they're semaphore signals
thread 8 provides a token for semaphore 8
the main program tries, in order, to acquire tokens 1 through 8
if the main program is blocked waiting for token 1, it has no
effect if thread 8 stockpiles an "8" token for later use
these are Java threads, not OS anythings; I/O shouldn't change
anything
you can't translate lectures about a real OS into the semantics
of a programming language
thread 7 runs after thread 8 because it is next on the ready
queue; the main thread doesn't become unblocked until all
workers have signalled
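The token picture above can be sketched with a counting semaphore (a minimal illustration of my own, using the modern java.util.concurrent.Semaphore rather than the assignment's hand-rolled class):

```java
import java.util.concurrent.Semaphore;

// Worker 8's signal banks a permit ("token") on semaphore 8 even while the
// main thread is still blocked elsewhere; the permit is consumed later.
public class Tokens {
    public static void main(String[] args) throws InterruptedException {
        Semaphore sem8 = new Semaphore(0);   // no tokens initially
        sem8.release();                      // worker 8 "signals": token banked
        // ... main may still be acquiring semaphores 1..7 here ...
        sem8.acquire();                      // succeeds at once: token waiting
        System.out.println("token 8 consumed");
    }
}
```

Stockpiling the token early has no effect on the main thread until it actually tries to acquire semaphore 8, exactly as described above.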
Here it goes...
Hi Professor,
Thank you so much for your message and I can't agree with you more!
1. Surely the class Semaphore is not what the name implies; these are not Unix
signals.
2. A Java thread is a user thread (at least this is true for version 1.2) and
the OS is not aware of the existence of threads. So, an I/O operation won't be
blocked.
3. All other things are unimportant except my question about the yielded
thread in my first email, that is, why the yielded thread continues to work.
4. I now repeat the whole question as follows:
----a) The original source code attached is the same as I sent in my last mail.
----b) The running result is as follows:
----c) The Daemon array is {0, 4, 4, 4, 6, 4, 6, 4, 6}, which implies that
threads "8", "6", "4" will be interrupted by the Daemon, or in other words,
should yield.
----d) The yielded thread should relinquish the CPU and be placed at the back
of the ready queue.
----e) Thread "8" gets the first interrupt as shown below, and this is
consistent with our expectation.
----f) Threads "6" and "4" get interrupts as shown below and they both continue
as if the interrupt didn't happen!!!! This makes no sense!!! Can you explain
this?
----g) I compiled this code on the server "alpha" through an "SSH" client, and I
also changed the code in many tests and got similar results.
----h) Can you check the result yourself? I would be very grateful.
Thank you in advance for your attention.
Have a nice day,
p.s. The following are the running results based on the attached source code, in
which I only added one more display statement within the "interrupt function"
of the Daemon class.
Eight workers have started.
Main thread continues work.
Main thread waits on semaphore 1.
Worker 8 begins execution.
Worker 7 begins execution.
Worker 6 begins execution.
Worker 5 begins execution.
Worker 4 begins execution.
Worker 3 begins execution.
Worker 2 begins execution.
Worker 1 begins execution.
Worker 8 doubles own array component.
worker 8 is interrupted. //-------------------This is
first interrupt of thread "8", and thread "7" continues instead.
Worker 7 doubles own array component.
Worker 6 doubles own array component.
Worker 5 doubles own array component.
Worker 4 doubles own array component.
Worker 3 doubles own array component.
Worker 2 doubles own array component.
Worker 1 doubles own array component.
Worker 8 sets neighbor's array component.
Worker 8 signals semaphore 8.
Worker 8 terminates.
Worker 7 sets neighbor's array component.
Worker 7 signals semaphore 7.
Worker 7 terminates.
worker 6 is interrupted. //---------This is
second interrupt of thread "6", but it continues!!!!
Worker 6 sets neighbor's array component.
Worker 6 signals semaphore 6.
Worker 6 terminates.
Worker 5 sets neighbor's array component.
Worker 5 signals semaphore 5.
Worker 5 terminates.
worker 4 is interrupted. //---------This is third
interrupt of thread "4", but it continues!!!!
Worker 4 sets neighbor's array component.
Worker 4 signals semaphore 4.
Worker 4 terminates.
Worker 3 sets neighbor's array component.
Worker 3 signals semaphore 3.
Worker 3 terminates.
Worker 2 sets neighbor's array component.
Worker 2 signals semaphore 2.
Worker 2 terminates.
Worker 1 sets neighbor's array component.
Worker 1 signals semaphore 1.
Worker 1 terminates.
Main thread reads 1 from position 1.
Main thread waits on semaphore 2.
Main thread reads 1 from position 2.
Main thread waits on semaphore 3.
Main thread reads 1 from position 3.
Main thread waits on semaphore 4.
Main thread reads 1 from position 4.
Main thread waits on semaphore 5.
Main thread reads 1 from position 5.
Main thread waits on semaphore 6.
Main thread reads 1 from position 6.
Main thread waits on semaphore 7.
Main thread reads 1 from position 7.
Main thread waits on semaphore 8.
Main thread reads 2 from position 8.
Sum = 9
System terminates normally.
Nick Huang/Qingzhe Huang
Here it comes....
when thread 8 yields, control goes to thread 7 earlier (i.e., before
more statements of thread 8) than it otherwise would
compare thread 8's behavior (_all_ its behavior) in your printout
with the all 4s printout
see how long it takes thread 8 to circle around before setting
its neighbor's array component
THAT'S the result of the interrupt!
Here it goes...
Hi Professor,
I cannot follow your explanation, either because my English is too poor or
because we are not talking about the same question. To eliminate the second
possibility, I have indexed all results as follows.
1) The problem is not that thread "8" doesn't behave well. On the contrary,
it is exactly what we expect of it.
2) The problem is thread "6" and thread "4".
3) At "output 27", thread "6" is interrupted; we see it display this line
BEFORE it is interrupted.
4) At "output 28" we don't see thread "5"; instead thread "6" continues to
be alive!! This is the problem!
5) A similar situation happens at "output 34", where thread "4" claims it will
be interrupted at its next CPU execution; however, at the next output, which is
"output 35", thread "4" shows it is still alive.
6) I checked MSDN and the function "yield" is exactly what its name implies,
the same as in Unix: "Causes the currently executing thread object to
temporarily pause and allow other threads to execute."
7) The function "yield" should not change the execution sequence of these
same-priority threads, that is, the sequence
8,7,6,5,4,3,2,1,8,7,6,5,4,3,2,1,8,7,6,5,4,3,2,1... It only forces the
current thread to relinquish control of the CPU for the rest of its time
slice.
Thank you so much for your time and attention,
Nick Huang/Qingzhe Huang
p.s. The following results are indexed by me for reference.
1 Eight workers have started.
2 Main thread continues work.
3 Main thread waits on semaphore 1.
4 Worker 8 begins execution.
5 Worker 7 begins execution.
6 Worker 6 begins execution.
7 Worker 5 begins execution.
8 Worker 4 begins execution.
9 Worker 3 begins execution.
10 Worker 2 begins execution.
11 Worker 1 begins execution.
12 Worker 8 doubles own array component.
13 worker 8 is interrupted.
14 Worker 7 doubles own array component.
15 Worker 6 doubles own array component.
16 Worker 5 doubles own array component.
17 Worker 4 doubles own array component.
18 Worker 3 doubles own array component.
19 Worker 2 doubles own array component.
20 Worker 1 doubles own array component.
21 Worker 8 sets neighbor's array component.
22 Worker 8 signals semaphore 8.
23 Worker 8 terminates.
24 Worker 7 sets neighbor's array component.
25 Worker 7 signals semaphore 7.
26 Worker 7 terminates.
27 worker 6 is interrupted.
28 Worker 6 sets neighbor's array component.
29 Worker 6 signals semaphore 6.
30 Worker 6 terminates.
31 Worker 5 sets neighbor's array component.
32 Worker 5 signals semaphore 5.
33 Worker 5 terminates.
34 worker 4 is interrupted.
35 Worker 4 sets neighbor's array component.
36 Worker 4 signals semaphore 4.
37 Worker 4 terminates.
38 Worker 3 sets neighbor's array component.
39 Worker 3 signals semaphore 3.
40 Worker 3 terminates.
41 Worker 2 sets neighbor's array component.
42 Worker 2 signals semaphore 2.
43 Worker 2 terminates.
44 Worker 1 sets neighbor's array component.
45 Worker 1 signals semaphore 1.
46 Worker 1 terminates.
47 Main thread reads 1 from position 1.
48 Main thread waits on semaphore 2.
49 Main thread reads 1 from position 2.
50 Main thread waits on semaphore 3.
51 Main thread reads 1 from position 3.
52 Main thread waits on semaphore 4.
53 Main thread reads 1 from position 4.
54 Main thread waits on semaphore 5.
55 Main thread reads 1 from position 5.
56 Main thread waits on semaphore 6.
57 Main thread reads 1 from position 6.
58 Main thread waits on semaphore 7.
59 Main thread reads 1 from position 7.
60 Main thread waits on semaphore 8.
61 Main thread reads 2 from position 8.
62 Sum = 9
63 System terminates normally.
Here it goes...
Here it comes...
On Sat, 31 Jan 2004, Hotmail wrote:
> hi,
>
> I hope you can help me to solve this puzzle in programming assignment1.
> 1. I think when a thread is interrupted by "yield", it should be put back on
> the ready queue, and the next thread at the head of the ready queue will be
> executed, right?
Under Java 1.2.1 scheduling, yes.
> 2. I only observed this for the first "interrupt"; all the other
> interruptions don't follow this.
> ---a) I changed the "Daemon" array to be {0, 4, 4, 4, 6, 4, 6, 4, 6}; so
> that workers 8, 6, 4 will be interrupted.
> ---b) I added one output inside the Daemon's interrupt function to help reading:
> public synchronized void interrupt (int tid) {
> if ( d[tid] > 4 )
> {
> System.out.println("worker "+tid+ " is interrupted.");
> Thread.yield ();
> }
> 3) Please see following result:
> ---a) Worker 8 is interrupted and worker 7 is invoked to execute. This is
> ok.
> ---b) Worker 6 is interrupted, and instead of worker 5 being invoked, worker 6
> continues to "set neighbour's array"!!!!
> ---c) Worker 4 is interrupted, and instead of worker 3 being invoked, worker 4
> continues to "set neighbour's array"!!!!
Okay, I now understand your question and I do have the answer.
The behaviour you are observing in your output below is caused by a subtle
mistake in the original coding of the Daemon:
> public synchronized void interrupt (int tid) {
^^^^^^^^^^^^
> if ( d[tid] > 4 )
> {
> System.out.println("worker "+tid+ " is interrupted.");
> Thread.yield ();
> }
Making the interrupt() method synchronized makes it atomic, as we have
learned. Thus, when worker 8 in your example calls yield() inside
interrupt(), it does *not* actually complete the interrupt() call until
Thread.yield() returns. This means workers 6 and 4 in your example
will have to wait, even before entering the method, until 8 is done. Not
only that, all the others won't enter it to evaluate if ( d[tid] > 4 ).
Therefore, when 8 is done, the next thread calls this synchronized method
exclusively, and releases it afterwards. So, when it is the turn for
worker 6 to atomically enter interrupt() and then yield(), it has to come back
and finish the interrupt() method, since the others are waiting for 6 to be
done with it. The same happens with worker 4.
To get the intended behaviour, comment out 'synchronized' in
interrupt():
class Daemon {   // you are the daemon by what you put
                 // as the initial values of 'd'
    private static int d[] = new int[] {0, 4, 4, 4, 6, 4, 6, 4, 6};

    public /*synchronized*/ void interrupt (int tid) {
        if ( d[tid] > 4 )
        {
            System.out.println("worker "+tid+ " is interrupted.");
            Thread.yield ();
        }
    }
}
and then you would get what you are expecting to get:
alpha.mokhov [sam] % javac Future.java
alpha.mokhov [sam] % java Future
Eight workers have started.
Main thread continues work.
Worker 8 begins execution.
Worker 7 begins execution.
Worker 6 begins execution.
Worker 5 begins execution.
Worker 4 begins execution.
Worker 3 begins execution.
Worker 2 begins execution.
Worker 1 begins execution.
Worker 8 doubles his own array component.
worker 8 is interrupted.
Worker 7 doubles his own array component.
Worker 7 sets neighbor's array component.
Worker 7 terminates.
Worker 6 doubles his own array component.
worker 6 is interrupted.
Worker 5 doubles his own array component.
Worker 5 sets neighbor's array component.
Worker 5 terminates.
Worker 4 doubles his own array component.
worker 4 is interrupted.
Worker 3 doubles his own array component.
Worker 3 sets neighbor's array component.
Worker 3 terminates.
Worker 2 doubles his own array component.
Worker 2 sets neighbor's array component.
Worker 2 terminates.
Worker 1 doubles his own array component.
Worker 1 sets neighbor's array component.
Worker 1 terminates.
Worker 8 sets neighbor's array component.
Worker 8 terminates.
Worker 6 sets neighbor's array component.
Worker 6 terminates.
Worker 4 sets neighbor's array component.
Worker 4 terminates.
Main thread reads 2 from position 1.
Main thread reads 2 from position 2.
Main thread reads 1 from position 3.
Main thread reads 2 from position 4.
Main thread reads 1 from position 5.
Main thread reads 2 from position 6.
Main thread reads 1 from position 7.
Main thread reads 2 from position 8.
Sum = 13
System terminates normally.
It is okay for this method to be non-synchronized: it has to be so, and
it's a read-only thing.
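The point about synchronized and yield() can be isolated in a minimal sketch (my own example, not the assignment code): Thread.yield() gives up the CPU but does not release a monitor lock, so a thread yielding inside a synchronized method keeps everyone else out until the method returns.

```java
// Two threads call a synchronized method that yields in the middle.
// Because yield() keeps the monitor, the entry/exit events of the two
// threads can never interleave.
public class YieldHoldsLock {
    static final StringBuilder trace = new StringBuilder();

    static synchronized void critical(String name) {
        trace.append(name).append("-in ");
        Thread.yield();                  // gives up the CPU, keeps the lock
        trace.append(name).append("-out ");
    }

    static String demo() {
        Thread a = new Thread(() -> critical("A"));
        Thread b = new Thread(() -> critical("B"));
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { }
        return trace.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Whichever thread enters first always leaves before the other enters, even though it yielded in between; that is exactly why the synchronized interrupt() above forced each worker to finish the method before yielding had any visible effect on the others.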
Here it goes...
Wonderful!!!
Hi,
A wonderful explanation! And I am totally convinced! Thank you so much for
this perfect explanation! ...
Have a nice day,
B.rgds,
Nick Huang/Qingzhe Huang
Here it goes...
Here it comes...
What should be done is to express the grammar rules in terms of the token
values. The grammar can be preceded by the enumerated type listing the
token values as supplied by the scanner. That's how it is done in some
systems.
--
J. Opatrny
Here it goes...
Hi professor,
Thank you very much for your advice; it seems to be the only way.
By the way, my program found one conflict in the grammar for the following:
S -> L := E ; | i ( Lp ) ;
Because of
L -> i Ar
Do you see that the two rules for S make First(S) contain "i" from both
alternatives, and this destroys the LL(1) property of the grammar? Am I right?
Thank you for your attention,
Nick Huang/Qingzhe Huang
Here it comes...
The problem you mention with the grammar can be fixed with
factorization!
--
J. Opatrny
Here it goes...
Hi professor,
Sorry for the stupid question. I understand it is a common-left-factor problem.
The reason I didn't realize it is that I have already written
algorithms to remove both common left factors and left recursion. Though I
am confident about my program, I didn't realize that EVEN THOUGH the
indirect left recursion can be removed by the algorithm in the textbook, p. 159
(Figure 4.3, algorithm for general left recursion removal), the
INDIRECT COMMON LEFT FACTOR STILL cannot be revealed by this algorithm.
You see:
S -> L := E ; | i ( Lp ) ;
L -> i Ar
Suppose L has a higher index value than S; then L won't be replaced by "i
Ar", and therefore my function cannot find this INDIRECT COMMON LEFT FACTOR
unless I FORCE the index of L to be LOWER than S's. Then it becomes something
like this after LEFT-RECURSION REMOVAL:
S -> i Ar := E ; | i ( Lp ) ;
Then my method of common-left-factor removal can work. And it WORKS NOW.
Thank you so much for your advice,
Nick Huang/Qingzhe Huang
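For reference, the factorization the professor suggests pulls the common prefix i out of the two S-alternatives (a sketch; the fresh nonterminal S' is my own naming, and it assumes First(Ar) does not contain "("):

```text
S  -> i S'
S' -> Ar := E ;
S' -> ( Lp ) ;
```

After this, the parser consumes i once and selects an S' alternative by the next token, so the First-set clash on "i" disappears.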
Here it goes...
Re: How can interrupts be disabled and a context switch still happen?
Hi,
Referring to question 4-b of assignment 2: suppose process P blocks on semaphore
"delay"; then how can P later be "unblocked" by
another process, since "all interrupts are disabled" by the instruction
"disable-interrupts"? It seems to me a definite deadlock, right?
So, the only possible, logical, reasonable solution is to have an
"enable-interrupts" at the very beginning of the "wait" function.
Otherwise, a context switch will never happen if process P is blocked after
"disable-interrupts". Therefore, logically, when
process P resumes later, it will definitely find that "interrupt" is
"enabled"; otherwise it cannot allow another process to run and
"unblock" it by signaling semaphore "delay".
Hope you can confirm this, and many thanks.
Nick Huang/Qingzhe Huang
Here it comes...
Hi,
This has been mentioned in the tutorial. There is *no* deadlock there at all,
and the solution is perfect for a uniprocessor system. You are missing an
important point: the fact that the interrupts are disabled or enabled is part
of each process' context and is stored in the IR register. As a simplistic
example, if bit 5 of that register is 1, then IRQ5 is enabled. I'm process A,
I have my copy of IR; you are process B, you have your own copy of IR.
When A is running P(), which is an OS procedure (hence supervisor mode),
and is about to block on wait(), wait() simply "calls" the context switch
routine, say ctsw(), and does some other stuff, like semaphore queue
management, etc. So there is no problem here. The OS (ctsw()) stores IR
along with A's context and loads the new values from the context of B.
Under no circumstances should wait() enable interrupts within.
When B makes 'delay' available and it is time to run A, A's context is
restored, along with IR (hence interrupts are disabled), and wait() now
lets it through.
There is no mutual exclusion violation here either: two P()'s or two
V()'s cannot run concurrently, and P() only gives up control at a single
point, at a specific event, when it must do so.
hth,
-s
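The flow just described can be condensed into pseudocode (my own paraphrase of the explanation above, not actual OS source):

```text
P(s):                        # OS code, supervisor mode; caller disabled interrupts
    if s is unavailable:
        enqueue(current, s.queue)
        ctsw()               # saves current's context INCLUDING the IR
                             # (interrupt-enable bits), then loads B's
                             # context and B's IR; B may run interrupts-on
    # when current is rescheduled, ctsw() restores its context and IR,
    # so it resumes here with interrupts disabled, exactly as saved;
    # wait() itself never enables interrupts
```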
Here it goes...
hi,
Thanks a lot for your clear explanation. I am almost convinced, except that I
always had the impression that a "context switch" can only happen via an
interrupt. Or, in other words, I imagined that the context switch is like an
"interrupt handler" or
"interrupt service routine". It never occurred to me that this kind of
high-priority "interrupt handler" can be invoked by
any process. You see, in a pre-emptive system, a context switch should
typically happen at the end of each time slice. It is very reasonable that it
is invoked by the "timer interrupt" or whatever. And this context-switch
interrupt, I guess, should have the highest priority. Am I right?
So, my question is just this: is it possible for a process to invoke an
interrupt handler like the "context switch"?
Thank you so much.
Nick Huang/Qingzhe Huang
Here it comes...
On Sun, 29 Feb 2004, hotmail wrote:
> hi,
>
> Thanks a lot for your clear explanation. I am almost convinced except
> that I always had the impression that a "context switch" can only happen
> via an interrupt.
That's a wrong impression. Any blocking system call (e.g. in I/O - read())
would cause a context switch. Otherwise, why would we have invented
multiprogramming when there were no interrupts?
> Or in other words, I imagine that context switch is like an
> "interrupt-handler" or
> "interrupt-service-routine". It never hit on my mind that this kind of
> high-priority "interrupt-handler" can be invoked by
> any process.
Any OS process. P(), V(), wait(), etc. are provided by the OS, and the OS
may choose whatever implementation it wants.
> You see, in an pre-emptive system, context-switch should be
> typically happen at each end of time slice.
Not necessarily (between app processes). On every timer interrupt the
scheduler runs, and it may (and yes, typically does) perform context
switching and manage the ready and other queues.
> It is very reasonable that it is invoked by the "timer interrupt" or
> whatever. And this context-switch interrupt, I guess, should have the
> highest priority. Am I right?
The timer interrupt is the 2nd highest after the power failure interrupt and
is unmaskable. Yet you may terminate or request I/O before your time
slice expires, hence a context switch.
> So, my question is just that is it possible for a process to invoke an
> interrupt handler like "context-switch"?
OS code can theoretically call any other part of the OS code.
Here it comes...
>
hi nick,
> i just got home, it's around 3 right now. I tried to msg you on msn but i think
your status is set to away. Anyway, just wanted to ask you if i can call you
later about some questions.
>
>
> Question: "Consider three concurrent processes", from the assignment: does
this mean that the processes may run at the same time, but not necessarily
start at the same time? Can two processes start EXACTLY at the same time?
>
>
> Question: does one process start another process? Like, if process A comes
to the instruction signal(goB), does it actually start process B, or only
change the value of goB from 0 to 1?
>
>
> Question: I think in his notes the teacher defines "wait" as decrementing and
"signal" as incrementing. Do the values inside wait and signal always have to
be 0 and 1, 0 for blocked and 1 for go?
>
Here it goes...
The professor is ... on programming
public class Cyclic {
    static int P = 8;                   // number of workers
    static int logP = 3;
    static int Pplus1 = 9;              // 0-indexed array
    static int logPplus1 = 4;           // lengths
    static Daemon daemon = new Daemon ();
    static Semaphore[][] future = new Semaphore[logP][P];
    static int[][] a = new int[logP][P];
    static int col;
    //static ...

    public static void main (String[] args) {
        Worker w[] = new Worker[P];
        for (int k=0; k<logP; k++) {
            future[k] = new Semaphore[P];
            //for( int j=0; j<e.length; j++ )
            for (int j=0; j<P; j++) {
                future[k][j] = new Semaphore(0);
            }
        }
        for( int j=0; j<P; j++ ) {
            //test
            //System.out.println("test signal here");
            future[0][3].Signal();
            col = daemon.column (j);
            w[col] = new Worker (col);
            w[col].start ();
        }
        for( int j=0; j<P; j++ ) {      // wait for workers
            try { w[j].join (); }       // to terminate
            catch (InterruptedException exc) { };
        }
        for( int j=1; j<Pplus1; j++ )
            System.out.println ("Sum " + j + " = " + a[logP-1][j-1]);
        System.out.println ("System terminates normally.");
    }
}

class Daemon {
    private static int[] w = new int[] {3, 6, 5, 7, 4, 2, 1, 0};
    private static int[] d = new int[] {0, 7, 4, 5, 4, 4, 9, 4, 1};

    public int column (int j) { return w[j]; }

    public void interrupt (int j) {
        if ( d[j] > 4 )
            Thread.yield ();
    }
}

class Semaphore {
    private int value;

    Semaphore (int value1) { value = value1; }

    public synchronized void Wait () {
        while( value <= 0 ) {
            try { wait (); }
            catch (InterruptedException e) { };
        }
        value--;
    }

    public synchronized void Signal () {
        ++value;
        notify ();
    }
}

class Worker extends Thread {
    private int j;
    private int sum;
    private int hop = 1;

    Worker (int col) { sum = col+1; j = col; }

    public void run () {
        System.out.println ("Worker " + (j) + " begins execution.");
        yield ();
        for (int level=0; level<Cyclic.logP; level++) {
            // if (j <= (Cyclic.P-hop)) {
            System.out.println ("Worker " + j + " defines a[" + (level) + "," + j + "].");
            Cyclic.a[level][j] = sum;
            //System.out.println("a["+level+"]["+j+"]="+Cyclic.a[level][j]);
            //System.out.println("exactly here before signal "+j);
            Cyclic.future[level][j].Signal();
            //System.out.println("exactly after signal of "+j);
            // ...
            if (j >= hop) {
                // ...
                //test
                //System.out.println("before wait of "+j);
                Cyclic.future[level][j-hop].Wait();
                System.out.println ("Worker " + j + " uses a[" + (level) + "," + (j-hop) + "].");
                sum += Cyclic.a[level][j-hop];
                System.out.println("worker "+(j)+" has sum of "+ sum);
            }
            //test
            //System.out.println("before interrupt");
            Cyclic.daemon.interrupt (j);
            //test
            //System.out.println("after interrupt");
            hop = 2*hop;
        }
        Cyclic.a[Cyclic.logP-1][j] = sum;
        System.out.println("Worker " + j + " terminates.");
    }
}
The following is the running result:
[qingz_hu@alpha PA2] % java Cyclic
Worker 3 begins execution.
Worker 6 begins execution.
Worker 5 begins execution.
Worker 7 begins execution.
Worker 4 begins execution.
Worker 2 begins execution.
Worker 1 begins execution.
Worker 0 begins execution.
Worker 3 defines a[0,3].
Worker 6 defines a[0,6].
Worker 5 defines a[0,5].
Worker 7 defines a[0,7].
Worker 7 uses a[0,6].
worker 7 has sum of 15
Worker 7 defines a[1,7].
Worker 4 defines a[0,4].
Worker 4 uses a[0,3].
worker 4 has sum of 9
Worker 4 defines a[1,4].
Worker 2 defines a[0,2].
Worker 1 defines a[0,1].
Worker 0 defines a[0,0].
Worker 0 defines a[1,0].
Worker 0 defines a[2,0].
Worker 0 terminates.
Worker 6 uses a[0,5].
worker 6 has sum of 13
Worker 5 uses a[0,4].
worker 5 has sum of 11
Worker 5 defines a[1,5].
Worker 3 uses a[0,2].
worker 3 has sum of 7
Worker 2 uses a[0,1].
worker 2 has sum of 5
Worker 2 defines a[1,2].
Worker 2 uses a[1,0].
worker 2 has sum of 6
Worker 2 defines a[2,2].
Worker 2 terminates.
Worker 1 uses a[0,0].
worker 1 has sum of 3
Worker 6 defines a[1,6].
Worker 6 uses a[1,4].
worker 6 has sum of 22
Worker 7 uses a[1,5].
worker 7 has sum of 26
Worker 7 defines a[2,7].
Worker 3 defines a[1,3].
Worker 4 uses a[1,2].
worker 4 has sum of 14
Worker 4 defines a[2,4].
Worker 4 uses a[2,0].
worker 4 has sum of 15
Worker 4 terminates.
Worker 1 defines a[1,1].
Worker 6 defines a[2,6].
Worker 6 uses a[2,2].
worker 6 has sum of 28
Worker 5 uses a[1,3].
worker 5 has sum of 18
Worker 5 defines a[2,5].
Worker 3 uses a[1,1].
worker 3 has sum of 10
Worker 1 defines a[2,1].
Worker 6 terminates.
Worker 3 defines a[2,3].
Worker 5 uses a[2,1].
worker 5 has sum of 21
Worker 5 terminates.
Worker 1 terminates.
Worker 7 uses a[2,3].
worker 7 has sum of 36
Worker 7 terminates.
Worker 3 terminates.
Sum 1 = 1
Sum 2 = 3
Sum 3 = 6
Sum 4 = 10
Sum 5 = 15
Sum 6 = 21
Sum 7 = 28
Sum 8 = 36
System terminates normally.
Are the algorithms in PA2 more efficient than those in PA1?
Hi,
Maybe I am still not quite clear about the algorithm of programming
assignment 2, because I am still not sure whether the PA2 way is better than
what we did in PA1. See below; is this correct about PA2? Each column
calculates its "sum" with the "previous-hop" column (sum += a[level][j-hop]).
You can see the total number of sum operations is FIVE.

          col0   col1         col2           col3
level 0   a0     a1           a2             a3
level 1   a0     a01=a0+a1    a12=a1+a2      a23=a2+a3
level 2   a0     a01          a012=a0+a12    a0123=a01+a23

And please see the following, which is the PA1 model; it requires only THREE
sum operations. I agree that in a multi-processor situation PA2 has some
advantages, such that the threads don't have to wait for each other at each
LEVEL. But the total number of sum operations is also a factor when
considering the efficiency of algorithms.

          col0   col1         col2           col3
level 0   a0     a1           a2             a3
level 1   a0     a01=a0+a1    a2             a3
level 2   a0     a01          a012=a01+a2    a3
level 3   a0     a01          a012           a0123=a012+a3

I have no idea of parallel computation, so what I am thinking may be garbage;
however, I would be grateful if you could justify this for me. PA2 has a total
of log(N) LEVELS of computation, where N stands for the number of columns, or
threads. But the total number of sum operations is (assume N = 2^k, where 2^k
means 2 to the power of k):

Sum(N) = (N - 2^0) + (N - 2^1) + (N - 2^2) + ... + (N - 2^(k-1)) = N*k - (2^k - 1)

In the case of N = 8, Sum(8) = 8*3 - (2^3 - 1) = 17.

PA1 has a total of N-1 LEVELS of computation, and the total number of sum
operations is N-1. So, the time complexity of PA2 is O(N*log(N)), while PA1 is
always O(N-1), or O(N). I am not sure which algorithm is better in a
multi-processor situation, but it seems to me that in PA2 some columns still
need to wait for the previous-hop column to finish first. Therefore, the
benefit of PA2 is even less.
Thank you for your time,
Nick Huang/Qingzhe Huang
Here it comes...
> Hi Huang,
>
> Consider that you are running this algorithm on a parallel computer
> where all instructions are executing in a synchronous manner (there is
> a model for such parallel computers and it is called the PRAM model).
> The given algorithm can run in O(lg(n)) time, where your algorithm
> needs time O(n). So the given algorithm takes less time than the
> second one. In fact, the second one is equivalent to the sequential
> algorithm. This is a very important factor if buying processors is
> less expensive than buying time.
>
> Granted, I am not very good at math, but the number of additions in the
> first algorithm is O(n lg(n)) and for the second one is O(n). If my
> calculation is correct, the first algorithm is not a 'cost optimal'
> parallel algorithm. But as I mentioned, sometimes 'time' costs more
> money.
>
> I hope you got the point.
>
> Good luck,
> Akon.
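The operation counts being compared above can be cross-checked with a small sketch (my own illustration, not part of either assignment): parallelAdds mimics the PA2 hop-doubling scheme, sequentialAdds the PA1 chain.

```java
// Count the additions each scheme performs for n = 2^k workers.
public class AddCounts {
    // PA2 style: at each level (hop = 1, 2, 4, ...), every worker
    // j >= hop performs exactly one addition.
    static int parallelAdds(int n) {
        int adds = 0;
        for (int hop = 1; hop < n; hop *= 2)
            adds += n - hop;
        return adds;                 // = n*lg(n) - (n - 1)
    }

    // PA1 style: one chain of n-1 additions.
    static int sequentialAdds(int n) {
        return n - 1;
    }

    public static void main(String[] args) {
        System.out.println(parallelAdds(8) + " vs " + sequentialAdds(8));
    }
}
```

For n = 8 this gives 17 versus 7 additions, matching the Sum(8) = 17 figure computed in the question, and for n = 4 it gives the FIVE versus THREE counted in the tables.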
Here it goes...
Name: Qingzhe Huang ID: 5037735
COMP 346 Assignment 3 Winter 2004
Issued: Week of March 15 Due: Week of March 29
All questions have equal weight.
1.
Answer: Assume that a paged memory access means that we fetch a datum or an instruction from main memory.
a) Though the page table is permanently in main memory, the frame pointed to by a page table entry may not be in memory; that is, the frame may be swapped out to secondary storage. Suppose this is not the case; then a memory access includes one access to the page table to fetch the page table entry, and a second memory access to fetch the data. The total time is 2 x 200 ns.
If the frame is not in main memory, then a page fault will happen, and the whole page fault penalty has to be added to the total access time.
As for whether this is "acceptable": I think it is not acceptable, because a CPU execution cycle is usually much shorter than the memory access time, maybe only a few ns, which is much less than 200 ns. In every instruction cycle the CPU needs an instruction fetch, which involves two memory accesses, then an operand fetch, which involves another two memory accesses, and maybe an operand store, and so on. So, for most of each instruction cycle, the CPU is waiting for memory access. This is of course not acceptable.
b) Assume a TLB fault is essentially a TLB miss. The difference between a TLB fault and a page fault is that a TLB fault is very fast: the TLB is close to the CPU and implemented in hardware, so it is not like a page fault, which is essentially a software interrupt. A TLB fault happens when the target is not found in the TLB; it is purely a miss. A page fault, however, happens only after a TLB fault, because the TLB is always searched first; it happens when the target frame is not stored in main memory. A software interrupt, or trap, is then generated to request the O.S. to swap in the requested frame from secondary storage.
It is possible to have 0 faults: the target address is found in the TLB.
It is possible to have 1 fault: the target is not found in the TLB and a TLB fault is generated; then the system continues the search in the page table and finds the target frame address.
It is possible to have 2 faults: a TLB fault happens, and the search in the page table also results in a page fault, that is, the target frame is not in main memory.
c) When n = 512 (assume that the target frame is always in main memory, so no swapping is needed):
Effective memory access time = 200*(0.95*(1-512/2560)) + (200*2)*(1 - 0.95*(1-512/2560))
= 152 + 96 = 248 ns
When n = 1024:
EAT = 200*(0.95*(1-1024/2560)) + (200*2)*(1-0.95*(1-1024/2560)) = 114 + 172 = 286 ns
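The arithmetic in part (c) can be reproduced with a one-line function (my own check; the 200 ns access time, the 0.95 factor, and the n/2560 term are taken directly from the formula above):

```java
// Effective access time with a 200 ns memory access: a directly resolved
// reference costs one access, otherwise two (page table + data).
public class Eat {
    static double eat(int n) {
        double mapped = 0.95 * (1 - n / 2560.0);  // fraction resolved directly
        return 200 * mapped + 400 * (1 - mapped);
    }

    public static void main(String[] args) {
        System.out.println(eat(512) + " " + eat(1024));  // ~248 ns and ~286 ns
    }
}
```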
2.
Answer: In order to share a segment between two processes, the segment tables of the two processes should each have one entry pointing to the same physical memory address. For example, process p1 has segment table entry i pointing to physical address x, and p2 has segment table entry j pointing to physical address x. Thereby both processes share the segment.
If the shared segment has code referring to an address in that segment, usually both processes should have the same name for that shared segment, because the shared code is executed in both sharing processes and refers to that particular segment. Of course, the particular segment reference becomes part of the code of each process; this means that in each process the segment name refers to some segment in that process. For example, if the shared code refers to the shared segment by the name "sharedSeg", then in both processes "sharedSeg" must refer to that shared segment; so the name "sharedSeg" must appear in the segment tables of both processes.
However, this assumes that the address reference uses "direct addressing mode". If the reference is done in "indirect addressing mode", say as an offset from the beginning of the segment, then the name of the shared segment would not necessarily be the same.
Usually the shared segment serves a similar purpose in the two processes, so naturally the protection status should be the same. But it is not necessarily so. For example, suppose the shared segment is purely data, and one process wants to write to it whereas the other simply wants to read it. In this case the protection status differs between the two processes.
3.
Answer:
Reference string: 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
LRU with 3 frames: the page fault number is 15

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 4 4 4 5 5 5 1 1 1 7 7 7 2 2 2 2 2
Frame 2:   2 2 2 2 2 2 6 6 6 6 3 3 3 3 3 3 3 3 3
Frame 3:     3 3 3 1 1 1 2 2 2 2 2 6 6 6 1 1 1 6
LRU with 4 frames: the number of page faults is 11

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 5 5 5 5 5 3 3 3 3 3 3 3 3 3
Frame 2:   2 2 2 2 2 2 6 6 6 6 6 7 7 7 7 1 1 1 1
Frame 3:     3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 2 2
Frame 4:       4 4 4 4 4 4 1 1 1 1 6 6 6 6 6 6 6
LRU with 5 frames: 9 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 6 6 6 6 6 6 6 6 6 6 6 6 6
Frame 2:   2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3:     3 3 3 3 3 3 3 1 1 1 1 1 1 1 1 1 1 1
Frame 4:       4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3
Frame 5:             5 5 5 5 5 5 7 7 7 7 7 7 7 7
Reference string: 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
FIFO with 3 frames: 16 page faults.

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 4 4 4 4 6 6 6 6 3 3 3 3 2 2 2 2 6
Frame 2:   2 2 2 2 1 1 1 2 2 2 2 7 7 7 7 1 1 1 1
Frame 3:     3 3 3 3 5 5 5 1 1 1 1 6 6 6 6 6 3 3
FIFO with 4 frames: 12 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 5 5 5 5 5 3 3 3 3 3 1 1 1 1
Frame 2:   2 2 2 2 2 2 6 6 6 6 6 7 7 7 7 7 7 3 3
Frame 3:     3 3 3 3 3 3 2 2 2 2 2 6 6 6 6 6 6 6
Frame 4:       4 4 4 4 4 4 1 1 1 1 1 1 2 2 2 2 2
FIFO with 5 frames: 11 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 6 6 6 6 6 6 6 6 6 6 6 6 6
Frame 2:   2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1
Frame 3:     3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2
Frame 4:       4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3
Frame 5:             5 5 5 5 5 5 7 7 7 7 7 7 7 7
Reference string: 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Optimal with 3 frames: 11 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3
Frame 2:   2 2 2 2 2 2 2 2 2 2 2 7 7 7 2 2 2 2 2
Frame 3:     3 4 4 4 5 6 6 6 6 6 6 6 6 6 1 1 1 6
Optimal with 4 frames: 8 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 1 1 1 1 1 7 7 7 7 1 1 1 1
Frame 2:   2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3:     3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
Frame 4:       4 4 4 5 6 6 6 6 6 6 6 6 6 6 6 6 6
Optimal with 5 frames: 7 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Frame 2:   2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3:     3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
Frame 4:       4 4 4 4 6 6 6 6 6 6 6 6 6 6 6 6 6
Frame 5:             5 5 5 5 5 5 7 7 7 7 7 7 7 7
4.
Answer: In the previous question, optimal page replacement has the smallest number of page faults compared with the FIFO and LRU algorithms in all of the 3-, 4-, and 5-frame cases. It is therefore the most efficient algorithm, since it incurs the least page-fault overhead.
5.
Answer:
a) Generally speaking, preemptive scheduling means a running process can be interrupted, and non-preemptive means the process cannot be interrupted while running.
More precisely, if scheduling decisions are made only when a process switches from the running state to the waiting state (i.e. it blocks on an I/O request) or when the process terminates, the scheduling is called non-preemptive; everything else is called preemptive.
When a computer system includes an interactive subsystem, response time matters more than CPU utilization. When a user types input or tries to execute some program, he expects the system to respond quickly, or at least reasonably fast, so the system must be able to interrupt other running processes to allow a quick response to the user's action. Non-preemptive scheduling cannot interrupt a running process to react to the user's action, so strictly non-preemptive scheduling is usually unsuitable for an interactive system.
b) Assume a CPU burst means an execution burst of a process, in other words the total computation time of the process until it encounters the next I/O block.
Then the answer is no, because when a CPU burst ends depends on the nature of the executing program: the burst ends only when the program finishes its computation or blocks on I/O. For example, a program searching for large prime numbers spends most of its time in CPU computation and only occasionally outputs a result, so its CPU bursts are typically long and its I/O bursts very short. If it runs on a time-sharing system, the burst is merely chopped into many small quanta; the total sum of the quanta remains the same.
The phrase "at the same time" really makes no sense, since at any given moment only one process can be running if the system has a single CPU. Therefore "at the same time" there is no way to have two bursts from any processes, whether they sit in the ready queue or in a semaphore queue!
6.
Answer:
a)
FCFS:

| b1 | b2 | b3 | b4 | b5 |
0    10   11   13   14   19

The waiting times are as follows:
b1 = 0, b2 = 10, b3 = 11, b4 = 13, b5 = 14
nonpreemptive SJF:

| b2 | b4 | b3 | b5 | b1 |
0    1    2    4    9    19

The waiting times are as follows:
b1 = 9, b2 = 0, b3 = 2, b4 = 1, b5 = 4
- nonpreemptive priority:

| b2 | b5 | b1 | b3 | b4 |
0    1    6    16   18   19

The waiting times are as follows:
b1 = 6, b2 = 0, b3 = 16, b4 = 18, b5 = 1
- pure round robin with quantum = 1:

| b1 | b2 | b3 | b4 | b5 | b1 | b3 | b5 | b1 | b5 | b1 | b5 | b1 | b5 | b1 | b1 | b1 | b1 | b1 |
0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19

The waiting times are as follows:
b1 = 9, b2 = 1, b3 = 5, b4 = 3, b5 = 9
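The waiting times above can be cross-checked with a small simulation. This is a sketch under the assumptions read off the Gantt charts: bursts b1=10, b2=1, b3=2, b4=1, b5=5, all arriving at time 0; the helper names `fcfs_wait` and `rr_wait` are mine.

```python
from collections import deque

bursts = {"b1": 10, "b2": 1, "b3": 2, "b4": 1, "b5": 5}

def fcfs_wait(order):
    # run-to-completion in the given order; waiting time = start time
    t, wait = 0, {}
    for p in order:
        wait[p] = t
        t += bursts[p]
    return wait

def rr_wait(order, q=1):
    # round robin: each process runs at most q, then rejoins the queue
    left = dict(bursts)
    queue = deque(order)
    t, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(q, left[p])
        t += run
        left[p] -= run
        if left[p]:
            queue.append(p)
        else:
            finish[p] = t
    return {p: finish[p] - bursts[p] for p in bursts}

assert fcfs_wait(["b1", "b2", "b3", "b4", "b5"]) == {"b1": 0, "b2": 10, "b3": 11, "b4": 13, "b5": 14}
# nonpreemptive SJF is just FCFS taken in shortest-job order
assert fcfs_wait(["b2", "b4", "b3", "b5", "b1"]) == {"b1": 9, "b2": 0, "b3": 2, "b4": 1, "b5": 4}
assert rr_wait(["b1", "b2", "b3", "b4", "b5"]) == {"b1": 9, "b2": 1, "b3": 5, "b4": 3, "b5": 9}
```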
7.
Answer: Suppose some of the original processes run for more than half of their quantum. These processes will effectively have a lower priority than newly arrived processes, because the scheduler will always pick the newly arrived processes first as long as they use no more than half of their quantum. Therefore the original processes that consume more than half of their quantum can be starved.
8.
Answer:
a) T/(T+S); this is the maximum achievable CPU utilization
b) T/(T+S)
c) Q/(Q+S)
d) Q/(Q+S) = 1/2
e) 0
Differences in question 3:
1. Yours: "LRU with 4 frames: the number of page fault is 11"
   Mine: "LRU, when four frames, Page faults occur 10 times"
2. Yours: "FIFO with 4 frames: 12 page faults"
   Mine: "FIFO, when four frames, Page faults occur 14 times"
3. Yours: "FIFO with 5 frames: 11page faults"
   Mine: "FIFO, when five frames, Page faults occur 10 times."
The attachment is a Word file made under Office XP; there is no reason it shouldn't open?? Sending it again.
Here it goes...
So, the correct answer should be:
LRU with 4 frames: the number of page faults is 10

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 1 1 1 1 1 1 6 6 6 6 6 6 6
Frame 2:   2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3:     3 3 3 3 5 5 5 5 5 3 3 3 3 3 3 3 3 3
Frame 4:       4 4 4 4 6 6 6 6 6 7 7 7 7 1 1 1 1
LRU with 5 frames: 8 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Frame 2:   2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3:     3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6
Frame 4:       4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3
Frame 5:             5 5 5 5 5 5 7 7 7 7 7 7 7 7
FIFO with 4 frames: 14 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 5 5 5 5 5 3 3 3 3 3 1 1 1 1
Frame 2:   2 2 2 2 2 2 6 6 6 6 6 7 7 7 7 7 7 3 3
Frame 3:     3 3 3 3 3 3 2 2 2 2 2 6 6 6 6 6 6 6
Frame 4:       4 4 4 4 4 4 1 1 1 1 1 1 2 2 2 2 2
FIFO with 5 frames: 10 page faults

Ref:     1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Frame 1: 1 1 1 1 1 1 1 6 6 6 6 6 6 6 6 6 6 6 6 6
Frame 2:   2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1
Frame 3:     3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2 2
Frame 4:       4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3
Frame 5:             5 5 5 5 5 5 7 7 7 7 7 7 7 7
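The fault counts for this reference string can be cross-checked with a short simulation. A sketch: the function name `faults` is mine, and LRU is modeled as a list kept in recency order (FIFO keeps insertion order), so both policies evict the list head.

```python
# Page-fault counter for FIFO and LRU on the reference string used above.
ref = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]

def faults(ref, frames, policy):
    mem, n_faults = [], 0           # mem is ordered oldest -> newest
    for page in ref:
        if page in mem:
            if policy == "LRU":     # a hit refreshes recency under LRU
                mem.remove(page)
                mem.append(page)
            continue
        n_faults += 1
        if len(mem) == frames:
            mem.pop(0)              # both policies evict the list head
        mem.append(page)
    return n_faults

assert faults(ref, 3, "LRU") == 15 and faults(ref, 3, "FIFO") == 16
assert faults(ref, 4, "LRU") == 10 and faults(ref, 5, "LRU") == 8
assert faults(ref, 4, "FIFO") == 14 and faults(ref, 5, "FIFO") == 10
```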
Here it comes...
> Hi professor,
>
> I have a few questions and just want to ask them in tomorrow's lecture. In
> case I forget or cannot express myself clear, I send them now. Hope you can
> answer them in class if you don't have time to reply. Thank you very much.
>
> 1. After one statement, I am supposed to re-use the temporaries. But the
> compound statement has the problems:
> i.e. if C then S else S ;
> The "S" after "then" maybe a compound statement itself, and temporaries used
> in "C" maybe overlapped by temporaries in "S"
>
> statements. Then my question is how to reuse temporaries? Are they reused at
> statement level or module level?
The most common approach is to have some temporaries for each module call; they are part of the frame of the module call, and they can be reused after a simple statement.
>
> 2. I remembered in class you mentioned that "three-address-code" can have
> immediate values with special sign like "=". So,
>
> I used a temporary to represent immediate value and also print out with
> immediate value. However, can we allow more than
>
> one immediate value temporary in same quadruple? For example, is it legal for
> (add, 3, 4, @t4) where "3" and "4" are all
>
> temporaries and has immediate value of 3 and 4 accordingly?
>
Yes, that's fine
> 3. This question actually goes before no. 2. Should I use temporary to
> represent immediate value? I mean in following
>
> example:
> a := b+4
> Should I generate a temporary @t0 with immediate value equal to 4 and then
> form a quadruple:
> (add, b, @t0, a)
You don't have to; immediate values can be used directly.
Remember that the intermediate code, unlike machine code,
is not something very rigid.
> Is this the correct way? Or should we use immediate value "4" directly in
> quadruple?
>
> 4. The grammar does not allow a "prototype" of module to be declared.
> Therefore the indirect recursion is not allowed. That
>
> is in body of module "first" we cannot call module "second" if module "first"
> is declared before module "second".
> i.e.
> module first()
> begin
> second();
> end;
> module second()
> begin
> first();
> end;
> So, when we discuss about recursion to be allowed or not, we are not talking
> about this recursion, right?
Prototyping is one way of handling the problem; another possibility is to go back to where a module is used before its declaration and update the code afterwards. Prototyping is used to avoid that.
>
> B.regards,
>
> Nick Huang/Qingzhe Huang
COMP 346 Assignment 4 Winter 2004
Issued: Week of March 29 Due: Week of April 5
Name: Qingzhe Huang ID: 5037735
All questions have equal weight. Do the following problems from our book
"Applied Operating Systems Concepts". (Page numbers from 2002 edition).
1. Chapter 8, "Deadlocks", p. 268, problem 8.8.
"Consider a system consisting of four resources ...
Answer:
The worst case is that each process holds one resource instance and then one of the three processes requests another instance. Since its maximum request is just two, and there is still one instance free, the request can be satisfied: that process gets two resource instances in total, finishes, and releases both. The system then has two more free instances, and the remaining two processes can obviously get whatever they want. So there is no resource shortage and no deadlock at all.
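The argument above can also be checked exhaustively. A sketch: enumerate every allocation state of 4 instances among three processes with maximum need 2 each, and verify that in every state some process can still reach its maximum, finish, and release; the helper name `deadlocked` is mine.

```python
from itertools import product

resources, max_need = 4, [2, 2, 2]

def deadlocked(alloc):
    # a state is deadlock-free if, repeatedly, some unfinished process
    # can satisfy its remaining need from the free pool, then finish
    # and release its allocation
    free = resources - sum(alloc)
    done = [False] * 3
    progress = True
    while progress:
        progress = False
        for i in range(3):
            if not done[i] and max_need[i] - alloc[i] <= free:
                free += alloc[i]
                done[i] = True
                progress = True
    return not all(done)

# every allocation state (0..2 instances held per process, sum <= 4)
states = [a for a in product(range(3), repeat=3) if sum(a) <= resources]
assert not any(deadlocked(a) for a in states)
```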
2. Chapter 8, "Deadlocks", p. 268, problem 8.9.
"Consider a system consisting of 'm' resources ...
Answer:
Suppose the system is in deadlock. This implies that all m resources have been allocated to the n processes; in other words, AllAllocated = m. By condition (b), the sum of all maximum needs is less than m + n: AllMaximumNeed < m + n. From AllMaximumNeed = AllNeeded + AllAllocated, it follows that AllNeeded < n.
This means the sum of the outstanding needs of all n processes is less than n, so by the pigeonhole principle there exists at least one process whose outstanding need is 0.
By condition (a), the maximum need of each process is at least 1, and it equals the sum of its allocation and its outstanding need: MaximumNeed = Allocated + Needed. So the process whose need is 0 requests no further resources; it can finish and release at least one resource, which lets the remaining processes proceed. This contradicts our assumption, so the system cannot be in deadlock.
3. Chapter 11, "File-System Interface", p. 408, problem 11.6.
"Could you simulate a multilevel directory ...
Answer:
a) If a directory name can be arbitrarily long, then we can simulate a multi-level directory with a single-level directory as in the following example.
As with the naming convention in the Java library, keep the original names for the root-level directories and files. For each subdirectory or file, append a "delimiter" after its parent directory's name and then its own name after the delimiter, and likewise for third-level files and directories. The delimiter can be any unique symbol. For example, "/root/java/program2/ass2.java" would be represented as "root~java~program2~ass2.java".
b) If a file name is restricted to at most 7 letters, then there is no way to represent all multi-level directories, because the limited number of name permutations leaves us with only a limited set of file names.
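The flattening scheme in (a) can be sketched in a few lines. This assumes the delimiter "~" never appears in a real path component; the helper names `flatten` and `unflatten` are mine.

```python
def flatten(path):
    # encode a multi-level path as a single-level name with "~"
    parts = [p for p in path.split("/") if p]
    return "~".join(parts)

def unflatten(name):
    # recover the multi-level path from the flat name
    return "/" + "/".join(name.split("~"))

assert flatten("/root/java/program2/ass2.java") == "root~java~program2~ass2.java"
assert unflatten("root~java~program2~ass2.java") == "/root/java/program2/ass2.java"
```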
4. Chapter 11, "File-System Interface", p. 408, problem 11.7.
"Explain the purpose of the 'open' and 'close' operation ...
Answer:
Most file operations involve searching for the file's entry in a directory. To avoid this repeated searching, many systems require an explicit "open" operation before a file is operated on, so that the directory entry is copied into an "open-file table" for the convenience of later operations. The "open" operation also accepts an "operation mode" such as "read" or "write", and the access rights are then checked by the system.
Since file operations consume many system resources, such as buffer space and an entry in the open-file table, it is quite necessary to release all of them when no more file operations are needed. This requires that a "close" operation be executed explicitly. In a multi-user system, the close operation decrements the counter in the open-file table; when the counter reaches zero, no user needs the file to be accessible any more, so the file can be physically closed and all associated system resources released for reuse.
5. Chapter 11, "File-System Interface", p. 408, problem 11.12.
"Consider a system that supports ...
Answer:
a) Create a group containing the 4990 users and give the group read permission. Then associate the group with the file and make sure the "universe" class has no read permission. The 4990 users can then access the file, while the 10 other users fall into the universe class, which has no read rights.
b) If we use an access-control list, it becomes more efficient: make the file generally accessible to all users, and specify in the access-control list the 10 user names and their (denied) permission.
6. Chapter 18, "Protection", p. 656, problem 18.12.
"Why is it difficult to protect ...
Answer:
Because if users were allowed to do their own I/O, they would also have to run in monitor mode, which grants them the privilege to access I/O. This opens a potential protection loophole: an ill-willed user could try to access restricted resources that he is not supposed to touch.
7. Chapter 18, "Protection", p. 656, problem 18.13.
"Capability lists are usually kept ...
There are two ways to prevent users from accessing and modifying capability objects:
1. Every object, including a capability, is associated with a tag indicating whether it is data (accessible to the user) or a capability object (not accessible to the user). Some hardware mechanism is also needed to enforce this.
2. Some systems split the user's address space into two parts: one part contains the user's data, code, etc., while the other part is accessible only to the O.S. and contains the capability objects.
The following is what my classmate sent me. It is said to be an old final from P.P., and I can recognize it by the ambiguous English sentences. What a typical style!!!
here is the full question:
----------------------------------------------
The standard four-semaphore array-based solution to the producer/consumer problem is shown below. We assume multiple producers and multiple consumers.

  |*|*|*| | | | | | | |*|*|*|*|    0-indexed [0..n - 1]
       ^ -->              ^ -->
       |                  |
       rear               front
       (last filled)      (last emptied)

'procedure' deposit (m : mess) =     'procedure' remove ('var' m : mess) =
'begin'                              'begin'
    wait (slot)                          wait (message)
    wait (sender)                        wait (receiver)
    rear <-- (rear + 1) [n]              front <-- (front + 1) [n]
    buf[rear] <-- m                      m <-- buf[front]
    signal (sender)                      signal (receiver)
    signal (message)                     signal (slot)
'end'                                'end'
We have: init[slot] = n
init[sender] = init[receiver] = 1 front = rear = 0
init[message] = 0
With multiprocessors, there is an interest in maximizing concurrency. We say that a P/C buffer has an access concurrency of m if m processes can be reading and/or writing it at the same time (assuming we have a large multiprocessor).
The instruction *atomic pre-increment* (API), denoted <++j>, atomically increments a shared integer j and returns the incremented value. Sample use: k <-- <++j>.
The goal is to maximize the access concurrency of the P/C buffer so that multiple 'deposits' and 'removes' can execute at the same time. Each 'deposit' must place a message into a different empty slot. Each 'remove' must extract a message from a different full slot.
a) [2 marks] What is the maximum access concurrency achievable in principle for this problem?
b) [7 marks] Develop a new solution to this problem that achieves maximum concurrency. Use semaphores, API, and any local variables you need.
c) [7 marks] Did you destroy the FIFO property in your new solution? Is there a logical conflict between high access concurrency and FIFO-ness? Explain.
d) If the buffer (a FIFO queue) were replaced by a stack, what access concurrency might we hope to achieve in principle? Explain.
solution:
a) It should be n.
b) Declare two groups of semaphores, D[n] and R[n], for "deposit" and "remove" respectively. Initialize every semaphore of D[n] to 1 and every semaphore of R[n] to 0. (Because at the beginning we can deposit into any slot, and no slot can be removed.)
declare global variable integer i=0,j=0;
procedure deposit (m : mess)
var index : integer;  // local variable
begin
    wait (D[i]);
    index <-- i;          // as i will be incremented next
    i <-- (++i) % n;      // increment i to allow another process to deposit
    buf[index] <-- m;
    signal (R[index]);    // allow a remove process to proceed
end;

procedure remove (var m : mess)
var index : integer;  // local variable
begin
    wait (R[j]);
    index <-- j;
    j <-- (++j) % n;      // increment j modulo n to point to the next semaphore
    m <-- buf[index];
    signal (D[index]);
end;
c) Yes, I destroy the FIFO property, as multi-access is not sequential: the deposit and remove procedures are both executed by multiple processes, and they operate on whatever slot can currently be "deposited" or "removed", as indicated by the arrays of semaphores.
d) I don't understand the question.
Here it comes...
NH: solution:
NH: a) It should be n.

a) It's only 2. 'n' is what we ideally want for the max concurrency, but it is not the case. Only one producer and one consumer (1+1=2) at most can access the buffer. All others are locked up while these two are active.

b) I think your solution is a bit of an overkill (in terms of the number of semaphores used). I did not think much, but the below seems to work. The idea is: declare local variables f and r, a valid copy of front and rear at the time, and narrow down the CS to only the reading of front and rear, just like your index variable does.

'procedure' deposit (m : mess) =     'procedure' remove ('var' m : mess) =
'begin'                              'begin'
    local r                              local f
    wait (slot)                          wait (message)
    wait (sender)                        wait (receiver)
    r <-- (rear++)%[n]                   f <-- (front++)%[n]
    signal (sender)                      signal (receiver)
    buf[r] <-- m                         m <-- buf[f]
    signal (message)                     signal (slot)
'end'                                'end'

Just by taking the buffer read/write out of the sender/receiver CS we achieve the desired level of max concurrency without any harm; the CS is only needed for the atomic update and read of rear and front. But since he says ++ is atomic already, we could rip out the CS semaphores entirely and just keep the counters.

'procedure' deposit (m : mess) =     'procedure' remove ('var' m : mess) =
'begin'                              'begin'
    local r                              local f
    wait (slot)                          wait (message)
    r <-- (rear++)%[n]                   f <-- (front++)%[n]
    buf[r] <-- m                         m <-- buf[f]
    signal (message)                     signal (slot)
'end'                                'end'

c) The above does not really destroy FIFOness, in the sense that the front and rear pointers are still sequential, and in the end the messages are still filled in and read out according to these pointers; yet they may do so at the same time, hence there is a logical conflict between the FIFO property of managing the buffer and access concurrency. I don't say your solution is incorrect; it just uses many semaphores. In your case it looks like you lost FIFOness, but in reality every process just reserves its index atomically before depositing, and the reservation is sequential. The deposits and the removals are fully concurrent, however.

d) It is either 1 or infinity. Since all the resources are the same, it doesn't matter who's depositing and who's removing, but the top update should be atomic (hence 1). So the access concurrency could be limited by the stack size for consumers and memory size (max stack size) for producers. That's what I think.
Here it goes...
hi,
Thanks so much for your prompt reply!! And I really appreciate your detailed explanations!
a) I still don't see the reason for "only 2" processes. Say, in a scenario where n writers are called to execute procedure "deposit" at the very beginning, these n writers would write to the buffer at the same time. Similarly, when the buffer is full with n records, there can be n readers reading them concurrently. You see, I can add a "yield()" below to let the n writers write at the same time, because there are multiple processors running. Am I right?

'procedure' deposit (m : mess) =
'begin'
    local r
    wait (slot)
    r <-- (rear++)%[n]
    >>>> yield();
    buf[r] <-- m
    signal (message)
'end'

b) c) d): Your explanation is clear and convincing. Thank you.
have a nice day,
Nick Huang/Qingzhe Huang
Here I said...
I found out it is incorrect after I jogged for one mile. You see, I should modify it as follows, because my method is to let "index" enumerate all the semaphores, so I don't care whether it is a "deposit" or a "remove".
semaphore D[n], R[n];
initialize all elements of D[n] to be 1 and all elements of R[n] to be 0;
declare global variable integer i=0;
procedure deposit (m : mess)
var index : integer;  // local variable
begin
    index <-- i++ mod n;  // it doesn't matter what i is, I just want i to loop
    wait (D[index]);
    buf[index] <-- m;
    signal (R[index]);    // allow a remove process to proceed
end;

procedure remove (var m : mess)
var index : integer;  // local variable
begin
    index <-- i++ mod n;  // it doesn't matter what i is, I just want i to loop
    wait (R[index]);
    m <-- buf[index];
    signal (D[index]);
end;
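The per-slot-semaphore idea can be made runnable with Python threads. A sketch only: it follows the first version with separate deposit/remove counters, the atomic pre-increment <++j> is emulated with a Lock, and the names (Buffer, deposit, remove) are mine, not from the assignment.

```python
import threading

class Buffer:
    def __init__(self, n):
        self.n = n
        self.buf = [None] * n
        self.D = [threading.Semaphore(1) for _ in range(n)]  # slot is free
        self.R = [threading.Semaphore(0) for _ in range(n)]  # slot is full
        self.i = self.j = 0
        self.ilock, self.jlock = threading.Lock(), threading.Lock()

    def deposit(self, m):
        with self.ilock:                 # atomic "index <-- i++ mod n"
            index, self.i = self.i % self.n, self.i + 1
        self.D[index].acquire()          # wait until this slot is empty
        self.buf[index] = m
        self.R[index].release()          # mark the slot full

    def remove(self):
        with self.jlock:
            index, self.j = self.j % self.n, self.j + 1
        self.R[index].acquire()          # wait until this slot is full
        m = self.buf[index]
        self.D[index].release()          # mark the slot empty again
        return m

b = Buffer(4)
out = []
consumers = [threading.Thread(target=lambda: out.append(b.remove())) for _ in range(8)]
for t in consumers:
    t.start()
for m in range(8):                       # 8 deposits cycle twice through 4 slots
    b.deposit(m)
for t in consumers:
    t.join()
assert sorted(out) == list(range(8))
```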
Here it comes...
<<I was at school the whole day and only just got back. This is an urgent dispatch from a revolutionary comrade; please pool your wisdom, everyone, and see how to do it.>>
Professor Huang, many thanks!
I think multiprogramming is the kernel of CPU utilization, and all the other measures, like buffering, caching, etc., serve it. I think your i-iv are all multiprogramming; even virtual memory exists to increase the degree of multiprogramming.
I can only think of reducing fragmentation, and of sharing memory. The hint seems to suggest virtual memory.
I don't know; maybe fragmentation again. And in Unix we have shared directories, which means multiple users share the same file or directory without copying it into their own directories.
I think that under some algorithms increasing the number of frames allocated to a process decreases the page faults, but under other algorithms adding frames may actually increase them; this is what Belady's anomaly is.
b), [5marks]Consider a computer with a paged virtual memory, if there are
programs of vastly differing sizes, why does that make it difficult to
select a single page size? if this problem becomes extreme, what other
memory-management approach might prove necessary? Explain.
Answer: none
I think it is internal fragmentation: if the page size is big, the internal fragmentation is bigger, and if the page size is small, swapping in and out from disk is inefficient. Does he mean segmentation?
c), [5marks] Consider a page-replacement algorithm for a process with four
page frames and eight pages. The first four reference are to pages: 0,1,2
and 3. Assuming a cold start, this causes four faults. The reference string
continues: 01273237103 ,List the new faults, for each of FIFO and LRU
replacement. Answer in the following form: FIFO: The new faults are 5 9 7 2
3. LRU: The new... (Note: what kind of format is this??? I can't figure it out!!!)
I don't have patience to read it through and I don't understand!
I think it should be 8192/8.
I think the size of the file in blocks is 2^64, since you said it is not enough.
To revoke Bob's write permission, the OS needs to check the ACL of every file and remove the "Bob, write" entry.
I think it is easier in a capability system, since Bob's process stores what kind of operations Bob can do on which objects. Compared with going through 1000 files to remove ACL entries, capabilities are simple. Right?
And I suspect this is not what Probst expects; it seems he thinks otherwise, just like what is said in the text. But I think the textbook is discussing access to a particular object, which is of course difficult; here it is the user!
I often wonder who is more stupid, Probst or me? :)
Should we do anything to remedy anything? I didn't see any problem here.
Here it comes...
Even great Mokhov might have made a mistake?
Hi,
The answer of 2 for a), as Mokhov says, is for the solution in the question, not for your solution; for your solution, of course, the max access concurrency would be n. And his solution is fine, but one problem still exists: a producer and a consumer might access the same slot. Your solution eliminates this problem, at the cost of using more semaphores and losing FIFOness; but it's OK for me :P
Thanks and good luck,
hui
Here it goes...
hi,
I stayed at school a whole day and just got back to see your mail; thank you so much for your message! You are a genius! I didn't realize that his solution has this fault after he removed the "wait(sender)" and "wait(receiver)". Below is the modified solution:

'procedure' deposit (m : mess) =     'procedure' remove ('var' m : mess) =
'begin'                              'begin'
    local r                              local f
    wait (slot)                          wait (message)
    r <-- (rear++)%[n]                   f <-- (front++)%[n]
    buf[r] <-- m                         m <-- buf[f]
    signal (message)                     signal (slot)
'end'                                'end'

And below is the original solution:

'procedure' deposit (m : mess) =     'procedure' remove ('var' m : mess) =
'begin'                              'begin'
    local r                              local f
    wait (slot)                          wait (message)
    wait (sender)                        wait (receiver)
    r <-- (rear++)%[n]                   f <-- (front++)%[n]
    signal (sender)                      signal (receiver)
    buf[r] <-- m                         m <-- buf[f]
    signal (message)                     signal (slot)
'end'                                'end'

The original HAS the semaphores "sender" and "receiver", which restrict access to only ONE producer and one consumer and definitely guarantee NO conflict on the same slots. Because the producers and consumers all execute in an index-incremental, that is, ascending, manner: if the first producer has not finished, the second producer won't start, so the semaphores "slot" and "message" suffice to prevent the same slot from being accessed by a producer and a consumer at once. (This is like our theory assignment.) But if the semaphores "sender" and "receiver" are removed, the producers may no longer finish in ascending order. For example, suppose several producers are writing their slots; whenever one producer finishes its job, it will "signal(message)", which allows a consumer to begin working. And this will cause trouble, because the very first producer may not have finished its job. I think this is the scenario you meant by "a producer and a consumer might access the same slot", right?
Thank you and have a nice day,
Nick Huang/Qingzhe Huang
I forgot part of the details in your slides from class, so I just checked our O.S. textbook, and the following is my understanding of the question "who flips the mode bit?" Could you confirm my understanding below?
It seems to me that a "wrapper function" of the C library cannot flip the mode bit of a user process. On page 43 of Silberschatz's "Operating System Concepts", 6th edition, it says,
"When a system call is executed, it is treated by the hardware as a software interrupt. Control passes through the interrupt vector to a service routine in the operating system, and the mode bit is set to monitor mode. The system-call service routine is a part of the operating system...."
And the ANSI-C library is not part of the OS; it cannot set the mode bit, because it is itself just software supplied by some party. The "trap" interrupt service routine is the one that checks the process identity and decides whether the intended memory address may be accessed (i.e. checks whether the address is between the "limit" and "base" registers).
A second thought is that "flipping the mode bit" is a kind of privileged instruction and can never be done in user mode. This is stated very clearly in the textbook. So I think that when we make a system call, we are unable to flip the mode bit even from the VC++ library's wrapper function from Microsoft, which happens to be the supplier of both VC++ and Windows.
Thank you for your attention.
The professor gave a very detailed explanation, and thanks for that:
As we discussed in class, System Calls are initiated by a wrapper function that
is part of the C libraries. This code is linked into our application and thus
resides in user space (it is not part of the kernel). The wrapper function
invokes the system call by executing the "int 0x80" assembly language
instruction. This is one of four special machine instructions (on the Intel CPU)
that allows a user process to initiate a transition to kernel space. You do not
need to be in kernel space to execute int 0x80.
When the CPU executes this instruction, it knows that a system call is taking
place. It then consults the Interrupt Descriptor Table (IDT) to find out what
routine in the kernel should be run when int 0x80 is executed. Note that 0x80 is
a hexadecimal number that translates to 128 in base 10. Basically, this means
that the CPU should jump to the address specified in location 128 of the IDT.
Where does this address come from? The IDT is a memory-resident table that is
initialized by the OS at boot time with the addresses of the various interrupt
handlers that are part of the kernel.
So the CPU then hands control to the kernel by invoking the code at the address
specified in the IDT. This code is actually the main system call handler whose
job it is to (i) determine which system call is being invoked and (ii) to jump
to the function that will actually handle the call for the user.
So that's the idea. If you read the explanation closely, you'll see that it's
consistent with the description in your textbook. Keep in mind that the Intel
processor does not actually have a global "mode bit" (other processors do). User
space and kernel space "identification" is actually assigned on a segment by
segment basis (where a segment is a block of code/data). Logically, the result
is the same, however. In any case, the user process cannot change mode directly
- it does not have the authority to do this. Instead it uses the special int
0x80 assembly language instruction to tell the CPU that it needs assistance from
the kernel. The CPU uses the previously assigned address in the IDT to pass
control to the kernel so that it can decide what to do about the user's request.
Hope this helps.
Why do we include a system header twice?
I checked Appendix B, where the author says that "ourhdr.h" must be included AFTER all standard system headers. There are two questions:
1. In sample program 1.5, why do we need to include <sys/types.h> again, since "ourhdr.h" already includes <sys/types.h>? As far as I know, in VC++ 6 "include" only tells the preprocessor to copy whatever is in the included header into your source code, so there seems to be no need to include <sys/types.h> twice. Or does "gcc" implement its preprocessor differently from Visual C++?
2. Does the order of the included header files matter?
The two questions seem connected: if question 1 is right about copying included header files into the source code, there is no need for repeated includes and no problem with include order. Does that make sense? I hope you can explain this to me, sir. Thank you.
The following explanation is more than detailed! Thank you again!
First, I should make a note about the < and > symbols. As you point out, their
use results in peculiar behaviour in a posting. However, this is not a bug in
Hobbit. This simply results from the fact that what you enter into this text
box is inserted directly into an HTML page and is subsequently interpreted by
your browser. So if you use the angled brackets in your posting, they may be
interpreted by the browser as special html characters. Since the content between
the two brackets is not a valid html tag (in your case), the browser just
discards anything that is between them. The Hobbit system does some parsing (for
database and presentation purposes) but it does not parse HTML. That is the
browser's job. Even if I wanted to do this I couldn't do much with the angled
brackets since it is very difficult to determine how the brackets should be
interpreted. Browsers can't deal with this problem either so if you want to use
< and > in either a Web page or a Hobbit posting, you must "escape" the brackets
as follows:
&lt; = <
&gt; = >
******
Now on to the main question. I will admit that the use of the header files is a
bit confusing so I will try to clarify the situation as simply as possible.
It is almost never a good idea to physically include the same header file
multiple times in a source file. If you do, you will likely get compiler errors
about duplicate variable or function definitions. On the other hand, it is
sometimes hard to know if a header file may already have been included in some
other source file. As a result, well-written header files usually include macro
directives that prevent a header file from being included if it has already been
inserted into the current compilation unit. This is why you can generally
include any of the C library or Unix header files without worrying about
duplication. They have all been written this way. Keep in mind that this does
not happen automatically. If you write your own header files, and do not include
the conditional inclusion macros, you will produce compiler errors.
In terms of the order of the headers, it generally does not matter at all since
WITH WELL WRITTEN HEADERS only the first one will be included anyway. However,
if there are problems with the headers, or differences between headers on
different operating systems, then order may matter. This is what Stevens is
referring to. In 1992 (when the code was written), BSD UNIX had a problem with
the signal.h header file. In his header, he included a patch for this. If the
BSD file was included first, then his header would add the proper macro. If a
non-BSD unix was used, then his header wouldn't change anything. That's fine.
But if a non-BSD unix was used and his header was included first, then his
"hack" would get included and the proper definition from the standard system
header would not. This appears to be the rationale for including his header
last.
As far as the types.h file is concerned, first note that it is included in
ourhdr.h because it has to be - its definitions may be required by the function
prototypes in the remainder of the file. So why is it included in the program
files in the text? This is good programming style. A few header files are so
common (e.g., stdio.h) that adding them to the top of every source file just
clutters up the code. So Stevens puts these in ourhdr.h as a convenience (modern
compilers may not even require you to include them anywhere). However, as a
matter of style, all other header files should be "#included" in the source
files that use them since it makes the interpretation/documentation of the
source file easier. As discussed above, this causes no problem with the compiler
since if the file has been included somewhere else, it won't matter at all.
I know this was a long answer but hopefully this clears up a little of the
confusion.
I think there is nothing shameful in sharing experience
and knowledge with others. The only question is whether it is worth sharing.
Here it goes.
////////////////////////////////////////////////////////////
The Strange Thing of Mutex and Thread in Windows
This is something I don't quite understand. According to MSDN, the owner of a mutex gives up its ownership after ReleaseMutex. But that seems not to be the case.
#include <stdio.h>
#include <windows.h>

HANDLE mutex, thread;
bool canTerminate = false;

DWORD WINAPI threadProc(void* param)
{
    unsigned long result;
    while (!canTerminate) {
        if ((result = WaitForSingleObject(mutex, 1000)) == WAIT_OBJECT_0) {
            printf("I am alive\n");
            ReleaseMutex(mutex);
        } else {
            switch (result) {
            case WAIT_ABANDONED:
                printf("abandoned\n");
                break;
            case WAIT_TIMEOUT:
                printf("time out\n");
                break;
            }
        }
        Sleep(1000);
    }
    return 0;
}

int main()
{
    int counter = 0;
    unsigned long result;
    // NOTE: the second parameter cannot be TRUE (see below)
    mutex = CreateMutex(NULL, FALSE, NULL);
    thread = CreateThread(NULL, 0, threadProc, NULL, 0, NULL);
    while (counter < 100) {
        if ((result = WaitForSingleObject(mutex, 1000)) == WAIT_OBJECT_0) {
            printf("I am main...\n");
            ReleaseMutex(mutex);
            counter++;
        } else {
            switch (result) {
            case WAIT_ABANDONED:
                printf("abandoned\n");
                break;
            case WAIT_TIMEOUT:
                printf("time out\n");
                break;
            }
        }
        Sleep(1000);
    }
    canTerminate = true;
    return 0;
}
// The running result is like this:
I am main...
I am alive
I am main...
I am alive
I am main...
I am alive
I am main...
I am alive
I am main...
I am alive
I am main...
I am alive
......
This is predictable and normal. But if I change the second parameter of "CreateMutex" to TRUE instead of FALSE, things become inexplicable. I mean, the parameter only says that the initial owner is the creating thread, and once it releases, any thread should be able to get ownership. But the running result is as follows:
I am main...
I am main...
time out
I am main...
time out
time out
I am main...
I am main...
...
This means the worker thread never gets the mutex, even though the creating thread gives it up with ReleaseMutex.
/////////////////////////////////////////
The above is an idiot question! When created with TRUE, the creator owns the mutex as if it had already been "waited" on once; each ReleaseMutex in the loop only balances the loop's own wait, so the ownership never drops to zero and no other thread can obtain it. See the correct way below:
#include <stdio.h>
#include <stdlib.h>   // for exit() (was missing)
#include <windows.h>

const int MaxThreadNumber = 5;
int index[MaxThreadNumber];
HANDLE mutex = NULL;
HANDLE threads[MaxThreadNumber];
bool canTerminate = false;

DWORD WINAPI threadProc(void* param)
{
    unsigned long result;
    int* ptr = (int*)param;
    if (*ptr == 0) {
        // thread 0 creates the mutex as initial owner, then releases at once
        mutex = CreateMutex(NULL, TRUE, NULL);
        if (ReleaseMutex(mutex) == 0) {
            printf("release error\n");
            exit(8);
        }
    } else {
        while (mutex == NULL) {
            Sleep(1000);
        }
    }
    while (!canTerminate) {
        if ((result = WaitForSingleObject(mutex, 1000)) == WAIT_OBJECT_0) {
            printf("I am %d and I am alive\n", *ptr);
            if (ReleaseMutex(mutex) == 0) {
                printf("release error\n");
                exit(8);
            }
        } else {
            switch (result) {
            case WAIT_ABANDONED:
                printf("abandoned\n");
                break;
            case WAIT_TIMEOUT:
                printf("I am %d and time out\n", *ptr);
                break;
            }
        }
        Sleep(1000);
    }
    return 0;
}

int main()
{
    int counter = 0;
    unsigned long result;
    for (int i = 0; i < MaxThreadNumber; i++) {
        index[i] = i;
        threads[i] = CreateThread(NULL, 0, threadProc, index + i, 0, NULL);
    }
    while (mutex == NULL) {
        Sleep(1000);
    }
    while (counter < 100) {
        if ((result = WaitForSingleObject(mutex, 1000)) == WAIT_OBJECT_0) {
            printf("I am main... and I am alive\n");
            ReleaseMutex(mutex);
            counter++;
        } else {
            switch (result) {
            case WAIT_ABANDONED:
                printf("abandoned\n");
                break;
            case WAIT_TIMEOUT:
                printf("I am main and time out\n");
                break;
            }
        }
        Sleep(1000);
    }
    canTerminate = true;
    return 0;
}
///////////////////////////
I am 0 and I am alive
I am 1 and I am alive
I am 2 and I am alive
I am main... and I am alive
I am 3 and I am alive
I am 4 and I am alive
I am 4 and I am alive
I am main... and I am alive
I am 3 and I am alive
I am 1 and I am alive
I am 0 and I am alive
I am 2 and I am alive
I am 4 and I am alive
I am main... and I am alive
I am 3 and I am alive
I am 1 and I am alive
I am 0 and I am alive
I am 2 and I am alive
I am 4 and I am alive
I am main... and I am alive
I am 3 and I am alive
I am 1 and I am alive
I am 0 and I am alive
I am 2 and I am alive
I am main... and I am alive
I am 4 and I am alive
I am 3 and I am alive
I am 1 and I am alive
I am 0 and I am alive
I am 2 and I am alive
I am 4 and I am alive
I am 0 and I am alive
I am main... and I am alive
I am 2 and I am alive
I am 3 and I am alive
I am 1 and I am alive
#include <sys/sem.h>
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>   // for exit() (was missing)
#include <unistd.h>   // for alarm(), sleep() (was missing)
#include <signal.h>
#include <errno.h>

const char* MUTEX_FILENAME = "semaphore.c";

typedef void (*sighandler_t)(int);

struct Mutex
{
    key_t key;
    int semid;
};

union semun
{
    int val;
    struct semid_ds *buf;
    ushort *array;
};

struct Mutex mutex;

int createMutex(struct Mutex* mutex);
int waitMutex(struct Mutex* mutex);
int releaseMutex(struct Mutex* mutex);
void alarmHandler(int param);
void safeSignal(int signo, sighandler_t handler);

void safeSignal(int signo, sighandler_t handler)
{
    struct sigaction action;
    action.sa_handler = handler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = SA_RESTART;
    if (sigaction(signo, &action, NULL) == -1) {
        printf("sigaction error\n");
        exit(2);
    }
}

void alarmHandler(int param)
{
    releaseMutex(&mutex);
    alarm(5);
}

int createMutex(struct Mutex* mutex)
{
    int project_id = 0;
    while (1) {
        while (1) {
            if ((mutex->key = ftok(MUTEX_FILENAME, project_id)) != -1) {
                break;
            }
            project_id++;
        }
        if ((mutex->semid = semget(mutex->key, 1, 0666 | IPC_CREAT)) != -1) {
            break;
        }
        // otherwise, probably the key is used by others; retry with a new key
        printf("create mutex failed and retry new key\n");
    }
    union semun arg;
    arg.val = 1;
    if (semctl(mutex->semid, 0, SETVAL, arg) == -1) {
        printf("semctl fails\n");
        exit(1);
    }
    return 1;
}

int waitMutex(struct Mutex* mutex)
{
    struct sembuf buf = {0, -1, 0};
    if (semop(mutex->semid, &buf, 1) == -1) {
        perror("semop wait");
        exit(3);
    }
    return 1;
}

int releaseMutex(struct Mutex* mutex)
{
    struct sembuf buf = {0, 1, 0};
    if (semop(mutex->semid, &buf, 1) == -1) {
        perror("semop release");
        exit(4);
    }
    return 1;
}

int main()
{
    createMutex(&mutex);
    safeSignal(SIGALRM, alarmHandler);
    alarm(5);
    while (1) {
        printf("before waiting...\n");
        waitMutex(&mutex);
        printf("after waiting...\n");
        sleep(6);
    }
    return 0;
}
/////////////////////
OK, the above is a stupid question. When I read the textbook again, it is very clear now: "semop" returns with an error (errno set to EINTR) when a signal arrives. This has nothing to do with safe signals; "sigaction" already gives a safe signal handler, but the interrupted semop must still be retried. See the following:
#include <unistd.h>
#include <sys/sem.h>
#include <sys/types.h>
#include <stdio.h>
#include <signal.h>
#include <errno.h>

const char* MUTEX_FILENAME = "semaphore.c";

typedef void (*sighandler_t)(int);
extern int errno;

struct Mutex
{
    key_t key;
    int semid;
};

union semun
{
    int val;
    struct semid_ds *buf;
    ushort *array;
};

struct Mutex mutex;

int createMutex(struct Mutex* mutex);
int waitMutex(struct Mutex* mutex);
int releaseMutex(struct Mutex* mutex);
void alarmHandler(int param);
void safeSignal(int signo, sighandler_t handler);

void safeSignal(int signo, sighandler_t handler)
{
    struct sigaction action;
    action.sa_handler = handler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = SA_RESTART;
    if (sigaction(signo, &action, NULL) == -1) {
        printf("sigaction error\n");
        _exit(2);
    }
}

void alarmHandler(int param)
{
    releaseMutex(&mutex);
    alarm(2);
}

int createMutex(struct Mutex* mutex)
{
    int project_id = 0;
    while (1) {
        while (1) {
            if ((mutex->key = ftok(MUTEX_FILENAME, project_id)) != -1) {
                break;
            }
            project_id++;
        }
        if ((mutex->semid = semget(mutex->key, 1, 0666 | IPC_CREAT)) != -1) {
            break;
        }
        // otherwise, probably the key is used by others; retry with a new key
        printf("create mutex failed and retry new key\n");
    }
    union semun arg;
    arg.val = 1;
    if (semctl(mutex->semid, 0, SETVAL, arg) == -1) {
        printf("semctl fails\n");
        _exit(1);
    }
    return 1;
}

int waitMutex(struct Mutex* mutex)
{
    struct sembuf buf = {0, -1, 0};
    while (1) {
        if (semop(mutex->semid, &buf, 1) == -1) {
            // retry if semop was merely interrupted by the alarm signal
            if (errno != EINTR) {
                perror("semop wait");
                _exit(4);
            }
        } else {
            break;
        }
    }
    return 1;
}

int releaseMutex(struct Mutex* mutex)
{
    struct sembuf buf = {0, 1, 0};
    while (1) {
        if (semop(mutex->semid, &buf, 1) == -1) {
            if (errno != EINTR) {
                perror("semop release");
                _exit(4);
            }
        } else {
            break;
        }
    }
    return 1;
}

int main()
{
    createMutex(&mutex);
    safeSignal(SIGALRM, alarmHandler);
    alarm(2);
    while (1) {
        printf("before waiting...\n");
        waitMutex(&mutex);
        printf("after waiting...\n");
        //sleep(6);
    }
    return 0;
}