
Topic: A community for translating English books on Information Technology

  1. #1
    Join Date
    Feb 2008
    Posts
    3

    A community for translating English books on Information Technology

    We're all IT people, so surely everyone on this forum wants to learn English in order to read English books on IT, right? So instead of each of us studying separately, why don't we study together? I currently have a lot of books in English. I'm opening this topic so that anyone who wants to can join me in translating them into Vietnamese for everyone to study. Since there are plenty of books, you don't have to worry about running out. While practicing translation, if there is any part we can't translate, we can discuss it together to find the best translation, which is also good for you personally. If you have the same idea as me, please get in touch by email via Yahoo! Messenger so we can discuss it, and I'll connect you with one another so more people can share. Email: namhung0112@yahoo.com Thanks for reading this topic.


  2. #2
    Join Date
    Mar 2010
    Posts
    1

    Is anyone good at English who could translate the site www3.school.com for me? I'll reward you generously.



  3. #3
    thich_tieng_anh — In charge of English for Information Technology
    Join Date
    Jan 2007
    Location
    Vũng Tàu, Việt Nam
    Posts
    221

    Quote: Originally posted by namhung0112
    So instead of each of us studying separately, why don't we study together?

    I'm opening this topic so that anyone who wants to can join me in translating the books into Vietnamese for everyone to study.
    If we can do that, it would be great. I can create a separate topic where you post your translations and everyone can discuss them.

    Quote: Originally posted by nguyenmanhht
    Is anyone good at English who could translate the site www3.school.com for me? I'll reward you generously.
    Hix!!! Translating that site would take... 10 years! How exactly will you reward people, so everyone knows whether to invest the effort?



  4. #4
    Join Date
    Oct 2010
    Posts
    1

    namhung, I'm also looking for a place to learn English; please send me your link.



  5. #5
    Join Date
    Mar 2008
    Posts
    5

    Is anyone taking on English translation work for me?



  6. #6
    Join Date
    Mar 2008
    Posts
    5

    Here's something for you all to translate:

    Chapter 1: Introduction to Concurrent
    Programming and Components

    1.1 Introduction

    This chapter introduces the topics of the book, particularly concurrency and components. Because the concept of concurrency, particularly as it applies to programming, is so poorly understood by novice programmers, this chapter begins by giving a working definition of concurrent programming. This definition abandons the largely useless definition of concurrency as two programs running at the same time, replacing it with a definition that deals with how concurrency affects the implementation of a solution to the problem.
    Once the definition of concurrent programming has been given, special purpose objects called concurrent components are introduced. These objects are the most interesting objects in concurrent programming because they are the ones that coordinate the
    activities in a concurrent program. Without concurrent components a concurrent program is simply a set of unrelated activities. It is the components that allow these activities to work together to solve a problem. Components are also the most difficult objects to write. This is because the activities (or active objects) correspond closely to normal procedural programs, but components require a change in the way that most programmers think
    about programs. It is also in components that the problems specific to concurrent programming, such as race conditions and deadlock, are found and dealt with. The rest of the book is about how to implement concurrent programs using these concurrent components.

    Finally, this chapter explains the different types of concurrent programs and how these programs result in various types of programs. Part of understanding concurrent programming is realizing that there is more than one reason to do concurrent programming. An important aspect of any program is that it should solve a problem. Concurrency improves the solution to many different types of problems. Each of these problem types looks at the problem to be solved in a slightly different manner and thus requires the programmer to approach the problem in a slightly different way.

    1.2 Chapter Goals

    After completing this chapter, you should be able to:

    • Understand why concurrent programming is important.
    • Give a working definition of a concurrent program.
    • Understand the two types of synchronization and give examples of each.
    • Give a definition of the term component and know what special problems can be encountered when using components.

    • Describe several different reasons for doing concurrent programming and how each of these reasons leads to different design decisions and different program implementation.

    1.3 What Is Concurrent Programming?
    The purpose of this book is to help programmers understand how to create concurrent programs. Specifically, it is intended to help programmers understand and program special concurrent objects, called concurrent components. Because these components are used only in concurrent programs, a good definition of a concurrent program is needed before components can be defined and methods given for their implementation.
    This section provides a good working definition of a concurrent program after first explaining why concurrent programming is an important concept for a programmer to know. The working definition of a concurrent program provided here will serve as a basis for understanding concurrent programming throughout the rest of the book.
    1.3.1 Why Do Concurrent Programming?
    The first issue in understanding concurrent programming is to provide a justification for studying concurrent programming. Most students and, indeed, many professional programmers have never written a program that explicitly creates Java threads, and it is possible to have a career in programming without ever creating a thread. Therefore, many programmers believe that concurrency in programming is not used in most real systems, and so it is a sidebar that can be safely ignored. However, that the use of concurrent programming is hidden from programmers is itself a problem, as the effects of a concurrent program can seldom be safely ignored.

    When asked in class, most students would say that they have never implemented a concurrent program, but then they can be shown Exhibit 1 (Program1.1). This program puts a button in a JFrame and then calculates Fibonacci numbers in a loop. The fact that there is no way to set the value of stopProgram to true within the loop implies that the loop is infinite, and so it can never stop; however, when the button is pressed the loop eventually stops. When confronted with this behavior, most students correctly point out that when the Stop Calculation button is pressed the value of stopProgram is set to true and the loop can exit; however, at no place in the loop is the button checked to see if it has been pressed. So, some mechanism must be present that is external to the loop that allows the value of stopProgram to be changed. The mechanism that allows this value to be changed is concurrency.

    import java.awt.*;
    import java.awt.event.*;

    /**
    * Purpose: This program illustrates the presence of threads in
    * a Java program that uses a GUI. A button is created
    * that simply sets the variable "stopProgram" to
    * true, which should stop the program. Once the
    * button is created, the main method enters an
    * infinite loop. Because the loop does not explicitly
    * call the button, there appears to be no way for the
    * program to exit. However, when the button is pushed,
    * the program sets stopProgram to true, and
    * the program exits, illustrating that the button is
    * running in a different thread from the main method.
    */

    public class Fibonacci
    {
        // Flag shared between the main thread and the GUI thread.
        private static boolean stopProgram = false;

        public static void main(String argv[]) {
            Frame myFrame = new Frame("Calculate Fibonacci Numbers");
            List myList = new List(4);
            myFrame.add(myList, BorderLayout.CENTER);

            Button b1 = new Button("Stop Calculation");
            b1.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    stopProgram = true;
                }
            });

            Button b2 = new Button("Exit");
            b2.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    System.exit(0);
                }
            });

            Panel p1 = new Panel();
            p1.add(b1);
            p1.add(b2);
            myFrame.add(p1, BorderLayout.SOUTH);
            myFrame.setSize(200, 300);
            myFrame.show();

            int counter = 2;
            while (true) {
                if (stopProgram)
                    break;
                counter += 1;
                myList.add("Num = " + counter + " Fib = " + fibonacci(counter));
                myFrame.show();
            }

            // Note: stopProgram is never set to true inside the loop above.
            // How does the program ever get to this point?
            myList.add("Program Done");
        }

        public static int fibonacci(int NI) {
            if (NI <= 1) return 1;
            return fibonacci(NI - 1) + fibonacci(NI - 2);
        }
    }

    What is happening in Exhibit 1 (Program1.1) is that, behind the scenes and hidden from the programmer, a separate thread, the Graphical User Interface (GUI) thread, was started. This thread is started by Java and is running all the time, waiting for the Stop Calculation button to be pressed. When this button is pressed, the GUI thread runs for a short period of time concurrently with the main thread (the thread doing the calculation of Fibonacci numbers) and sets the value of stopProgram to true. Thus, Exhibit 1 (Program1.1) is a very simple example of a concurrent program. Because nearly every Java programmer at some point has written a program that uses buttons or other Abstract Window Toolkit (AWT) or Swing components, nearly every Java programmer has written a concurrent program.

    This brings up the first reason to study concurrent programming. Regardless of what a programmer might think, concurrent programming is ubiquitous; it is everywhere. Programmers using visual components in nearly any language are probably using some form of concurrency to implement those components. Programmers programming distributed systems, such as programs that run on Web servers that produce Web pages, are doing concurrent programming. Programmers who write UNIX ".so" (shared object) files or Windows ".com" or ".dll" files are writing concurrent programs. Concurrency in programs is present, if hidden, in nearly every major software project, and it is unlikely that a programmer with more than a few years left in a career could get by without encountering it at some point. And, as will be seen in the rest of the book, while the fact that a program is concurrent can be hidden, the effects of failing to account for concurrency can result in catastrophic consequences.

    The second reason to study concurrent programming is that breaking programs into parts using concurrency can significantly reduce the complexity of a program. For example, there was a time when implementing buttons, as in Exhibit 1 (Program1.1), involved requiring the loop to check whether or not a button had been pressed. This meant that a programmer had to consistently put code throughout a program to make sure that events were properly handled. Using threads has allowed this checking to be handled in a separate thread, thus relieving the program of the responsibility. The use of such threads allows programmers to write code to solve their problem, not to perform maintenance checks for other objects.

    The third reason to study concurrent programming is that its use is growing rapidly, particularly in the area of distributed systems. Every system that runs part of the program on separate computers is by nearly every definition (including the one used in this book) concurrent. This means every browser access to a Web site involves some level of concurrency. This chain of concurrency does not stop at the Web server but normally extends to the resources that the Web server program uses. Properly implementing these resources requires the programmer to at least understand the problems involved in concurrent access; otherwise the program will have problems, such as occasionally giving the wrong answer or running very slowly.

    The rest of this text is devoted to illustrating how to properly implement and control concurrency in a program and how to use concurrency with objects in order to simplify and organize a program. However, before the use of concurrency can be described, a working definition of concurrency, particularly in relationship to objects, must be given. Developing that working definition is the purpose of the rest of this chapter.

    1.3.2 A Definition of Concurrent Programming

    Properly defining a concurrent program is not an easy task. For example, the simplest definition would be when two or more programs are running at the same time, but this definition is far from satisfactory. For example, consider Exhibit 1 (Program1.1). This program has been described as concurrent, in that the GUI thread is running separately from the main thread and can thus set the value of the stopProgram variable outside of the calculation loop in the main thread. However, if this program is run on a computer with one Central Processing Unit (CPU), as most Windows computers are, it is impossible for more than one instruction to be run at a time; thus, by the simple definition given above, this program is not concurrent.

    Another problem with this simple definition can be illustrated by the example of two computers, one running a word processor in San Francisco and another running a spreadsheet in Washington, D.C. By the definition of a concurrent program above, these are concurrent. However, because the two programs are in no way related, the fact that they are concurrent is really meaningless.

    It seems obvious that a good definition of concurrent programming would define the first example as concurrent and the second as not concurrent; therefore, something is fundamentally wrong with this simple definition of concurrent programming. In fact, the simple-minded notion of concurrency involving two activities occurring at the same time is a poor foundation on which to attempt to build a better definition of the term concurrency. To create a definition of concurrency that can be used to describe concurrent programming, a completely new foundation needs to be built. A better, workable definition is supplied in the rest of Section 1.3.2.

    1.3.2.1 Asynchronous Activities

    Defining a concurrent program begins by defining the basic building block of a program which will be called an activity. An activity could be formally defined as anything that could be done by an abstract Turing machine or as an algorithm. However, what is of interest here is a working definition, and it is sufficient to define an activity as simply a series of steps implemented to perform a task. Examples of an activity would be baking a pie or calculating a Fibonacci number on a computer. The steps required to perform a
    task will be called an ordering.

    Activities can be broken down into subactivities, each an activity itself. For example, baking a pie could consist of making the crust, making the filling, filling the crust with the filling, and baking the pie. Exhibit 2 shows the steps in baking a pie, where the crust must first be made, then the filling made, the filling added to the crust, and the pie baked. If the order of these activities is completely fixed, then the ordering is called a total ordering, as all steps in all activities are ordered. In the case of a total ordering of events, the next step to be taken can always be determined within a single activity. An activity for which the order of the steps is determined by the activity is called a synchronous activity. Note that partial orderings are also controlled by synchronous activities; these are implemented by the programming equivalent of "if" and "while" statements.

    Exhibit 2: Synchronous Activity to Make a Pie



    In the case of making a pie it is not necessary to first make the crust and then make the filling. The filling could be made the night before, and the crust could then be made in the morning before combining the two to make a pie. If the order in which the crust and the filling are made can be changed, then the ordering is called a partial ordering (the order of steps to make the crust and the order of steps to make the filling remain fixed, but either can be done first). However, if one activity must always finish before the other begins, it is possible to implement this behavior with a synchronous activity.

    A special case occurs when, for a partial ordering, the next step is not determined by a single activity. To show this, several values of time must be defined. The time after which preparing the crust can be started is t1c, and the time by which it must be completed is t2c. The time after which preparing the filling can be started is t1f, and the time by which it must be completed is t2f. Now, if (t1c <= t1f < t2c) or (t1f <= t1c < t2f), then the activities of making the crust and the filling can (but do not necessarily have to) overlap. If the steps overlap, then the overall ordering of the steps cannot be determined within any one task or, thus, any one activity. One example of this situation for baking a pie is illustrated in the Gantt chart in Exhibit 3. Note that many other timelines are possible, as making the crust does not have to start at t1c, nor does it have to end at t2c; it simply has to occur between those two times. The same is true of making the filling. The two activities might not actually overlap; it is sufficient that they can overlap.

    The only way that these two activities can overlap in this manner is if the lists of steps for the activities are being executed independently. For example, it is possible that two
    bakers are responsible for the pie, one making the filling and the other making the crust. It is also possible that one baker is responsible for both the crust and filling, but they are switching back and forth from doing steps from one part of the recipe (making the crust) to another part of the recipe (making the filling). However they are accomplished, by the definition given here the steps involved in the two subtasks are being executed independently, or asynchronously, of each other. This type of activity is called an asynchronous activity.
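
    To make the idea concrete, here is a minimal Java sketch (not from the book; the class name and step counts are invented) in which the two sub-recipes run as separate threads, so each list of steps stays ordered internally while the combined ordering can differ from run to run:

    // Two asynchronous activities: each thread executes its own steps
    // in order, but the interleaving of the two is not determined by
    // either activity alone.
    public class PieThreads {
        public static void main(String[] args) throws InterruptedException {
            Thread crust = new Thread(new Runnable() {
                public void run() {
                    for (int step = 1; step <= 3; step++)
                        System.out.println("crust: step " + step);
                }
            });
            Thread filling = new Thread(new Runnable() {
                public void run() {
                    for (int step = 1; step <= 3; step++)
                        System.out.println("filling: step " + step);
                }
            });
            crust.start();      // from here on the activities are asynchronous
            filling.start();
            crust.join();       // wait for both sub-recipes to finish
            filling.join();
            System.out.println("both sub-recipes complete");
        }
    }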

    The definition of an asynchronous activity leads to a very simple definition of concurrency: Concurrency is defined as the presence of two or more asynchronous activities.

    When asynchronous activities are present in a program, it is possible (but not necessary) for the steps of the two activities to interleave. As we will see in Chapter 2, the number of different ways they can interleave can be quite large, and the results can be quite unexpected. However, note that from the definition of asynchronous activities the two activities do not have to run at the same time; they simply have to be able to run at the same time. This is a useful distinction, because the problems that will be encountered in concurrency occur not because the activities execute at the same time but because they can interleave their executions. It is also useful because if a program allows activities to interleave, it must protect against the ill effects of that interleaving whether it occurs or not. As will be seen, this means that methods that might be used concurrently must be synchronized even if the vast majority of the time the use of the synchronized statement provides no benefit.
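
    As a minimal sketch of that rule (illustrative, not the book's code; the SafeCounter name is invented), a shared object whose methods are all synchronized is protected whether or not an interleaving ever actually happens:

    // A completely synchronized object: every method must hold the
    // object's lock, so a read-modify-write from one activity cannot
    // interleave with another, even though most runs never collide.
    public class SafeCounter {
        private int count = 0;

        public synchronized void increment() {
            count = count + 1;    // now atomic with respect to other callers
        }

        public synchronized int get() {
            return count;
        }
    }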

    The importance of this definition of concurrency, compared with the definition of concurrency as multiple activities happening at the same time, cannot be overemphasized. This definition implies the types of problems that can occur and the way to solve those problems. If a concurrent program does not actually run two activities at the same time, but it can do so, then action must be taken to make sure problems do not occur. Any argument as to whether these two activities are actually running at the same time, or whether they generally run one after the other, is a moot point. Arguments about how the activities are actually implemented (for example, are priorities present in the system?) and how the implementation might affect the interactions (does the higher priority process always have to run first?) also do not matter. If asynchronous activities are present, then the program must account for this behavior.

    It should be noted that the definition of asynchronous activities solves the first problem with the definition of concurrency. The two threads running in Exhibit 1 (Program1.1) are asynchronous activities, thus they are concurrent. However, the two computers
    running in different cities are also asynchronous activities, so the definition of concurrent programming must be further tightened.

    1.3.2.2 Synchronization of Asynchronous Activities

    That two or more asynchronous activities are concurrent is a good definition of concurrency, but it is not a useful definition. As was mentioned before, two asynchronous activities that are unrelated are concurrent, but that does not mean that any particular action must be considered when reasoning about them. A useful definition requires some interaction between the activities. This interaction requires that the activities coordinate (or synchronize), so this section will define how synchronization affects the activities.

    Sebesta [SEB99] says that, "Synchronization is a mechanism that controls the order in which tasks execute." In terms of activities, this definition suggests that, while the asynchronous activities represent separate control over the execution of steps in the activity, at times the asynchronous activities agree to come together and cooperate in order to create valid partial orderings within the activities. Sebesta defines two types of synchronization, competitive synchronization and cooperative synchronization. To see how synchronization affects the partial orderings within an asynchronous activity, examples of these two types of synchronization are given.

    Exhibit 4 gives examples of both types of synchronization. The figure illustrates two asynchronous activities: making a pie crust and making a filling. These two activities will synchronize, first on a shared resource, illustrating competitive synchronization, and second around an event, illustrating cooperative synchronization. To understand competitive synchronization, consider what would happen if both recipes called for mixing the ingredients in a large bowl, but only one large bowl is available, so both activities must use the same bowl. If both activities used the bowl without considering the actions of the other activity, it would be possible to mix the filling and the crust at the same time, which would result in an incorrect solution to the problem of making the pie. Therefore, the two activities must compete for the use of the resource and synchronize on it with the rule that when one activity is using it the other cannot continue until the resource becomes free. In this example, the bowl is a shared resource that the two activities must synchronize on in order to correctly solve the problem; this is referred to as competitive synchronization.
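
    A minimal Java sketch of competitive synchronization (illustrative only; the Bowl and Baker classes and their method names are invented, not the book's code) protects the shared bowl with its lock:

    // Competitive synchronization: both activities must acquire the
    // bowl's lock before mixing, so the crust and the filling can
    // never be mixed in the bowl at the same time.
    public class Bowl {
        public synchronized void mix(String ingredients) {
            // Only one thread can be mixing at any moment; a second
            // caller blocks here until the bowl becomes free.
            System.out.println("mixing " + ingredients);
        }
    }

    class Baker extends Thread {
        private final Bowl bowl;          // the shared resource
        private final String ingredients;

        Baker(Bowl bowl, String ingredients) {
            this.bowl = bowl;
            this.ingredients = ingredients;
        }

        public void run() {
            bowl.mix(ingredients);        // competes for the bowl here
        }
    }

    Both bakers would be started on the same Bowl instance, for example new Baker(bowl, "crust").start() and new Baker(bowl, "filling").start(), so the lock is what serializes their use of the bowl.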

    The second type of synchronization occurs when asynchronous activities must wait on an event to occur before continuing. In Exhibit 4, this occurs when the making of the pie crust and the filling must both be completed before the filling can be added to the crust and the pie baked. Because the two activities must cooperate in waiting for this event,
    this type of synchronization is called cooperative synchronization. Note that, while the synchronization does impose a partial ordering on the asynchronous activities, it does not make them synchronous. Except for when the activities must synchronize for some reason, they are still asynchronous.
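
    Cooperative synchronization can be sketched the same way (again illustrative, not the book's code; the PieEvent class and its methods are invented): the baking step waits on an event that is signaled only when both parts are done.

    // Cooperative synchronization: the bake step waits for an event
    // that occurs only when both the crust and the filling are done.
    public class PieEvent {
        private int partsReady = 0;

        // Called by each activity when its part (crust or filling) is done.
        public synchronized void partDone() {
            partsReady = partsReady + 1;
            notifyAll();                  // wake any activity waiting on the event
        }

        // Called by the activity that will fill the crust and bake the pie.
        public synchronized void waitForBothParts() throws InterruptedException {
            while (partsReady < 2)        // re-check the condition on each wakeup
                wait();
        }
    }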

    1.3.2.3 Concurrent Programming

    With the introduction of asynchronous activities and synchronization, the background is now in place to define concurrency. Concurrency is the presence of asynchronous activities that interact and thus must at some point in their execution implement either competitive or cooperative synchronization. This is a workable definition of concurrency, as it does not require the activities to actually run at the same time, only that they can do so, and it requires that they synchronize in order to be considered concurrent.

    Now that a workable definition of concurrency has been built, it is relatively easy to build a definition of a concurrent program:

    Concurrent Program: A program that contains asynchronous activities which synchronize at one or more points or on one or more resources during execution.

    By design, this definition does not specify how asynchronous activities are implemented in the program. These activities might be Ada tasks, UNIX processes, Pthreads, or Java threads. It also does not say how the synchronization is achieved, which once again could be through Ada select statements, UNIX operating system calls, or use of a Java synchronized statement. Further, it does not say how the activities communicate, whether by method calls in a single process, interprocess communications such as UNIX pipes, or Remote Method Invocation (RMI) to processes on completely different computers. These are all just details of individual concurrent programs, but the basic principles of concurrency will always be the same.

    [1] The term component is poorly defined and is used in many ways in object-oriented programming. Because some readers might use the term in non-concurrent contexts, the concept is introduced here as concurrent components; however, all components in this book are concurrent components, so the "concurrent" part of the term will be dropped, and the term component will represent a concurrent component.

    1.4 Components

    An interesting way to look at a concurrent program is to think of it as containing two types of units: activities that act on other entities, and entities that control the interactions of those activities. If these units are objects, then all objects in a concurrent program can be made either active (asynchronous activities such as threads) or passive (such as a shared resource or an event that is used for synchronization). Other types of simple, non-concurrent objects, such as vectors or StringTokenizers, are used by active and passive objects, but these are not involved in the concurrency in the program.

    Most programmers do not have problems understanding active objects, as these are simply instructions written and executed in a procedural order that, in principle, can be represented by a flow chart; nor do they have problems understanding non-concurrent objects. This is probably because the behavior of the object can normally be understood in the context of the activity within which it is being run, much like a procedural program. This is how students have been taught to program since their first introductory class.

    However, passive objects, which from now on will be called concurrent components or simply components, are much more difficult for most programmers. This is likely because they provide the infrastructure for the asynchronous activities that execute in a concurrent program. This is a somewhat foreign concept to many programmers.

    Components in the example of making a pie are the shared mixing bowl and the event that signifies that preparation of the crust and filling is completed. They control the behavior of the asynchronous activities so that they coordinate and produce a correct result. They also sit between asynchronous activities and are shared and used by multiple asynchronous activities.

    Note that not all objects that are non-active are components. For example, a vector is safe to use in a multi-threaded program, but it is not a component, because even if it is used by a number of threads it is not normally used to coordinate between those threads. Objects are added to or removed from the vector, but the vector is used only to store data elements, not to coordinate the asynchronous activities. A special type of vector called a bounded buffer (presented in Chapter 3) is actually used to coordinate between asynchronous activities.
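
    As a preview of that idea (a hedged sketch under the usual wait/notify conventions, not the implementation Chapter 3 develops), a bounded buffer differs from a plain vector precisely because its methods make callers wait, and that waiting is what coordinates the producing and consuming activities:

    // A tiny bounded buffer: put() blocks when full and get() blocks
    // when empty, so the buffer coordinates the two activities instead
    // of merely storing data the way a vector does.
    public class BoundedBuffer {
        private final Object[] items;
        private int count = 0, in = 0, out = 0;

        public BoundedBuffer(int capacity) {
            items = new Object[capacity];
        }

        public synchronized void put(Object x) throws InterruptedException {
            while (count == items.length)    // full: the producer must wait
                wait();
            items[in] = x;
            in = (in + 1) % items.length;
            count++;
            notifyAll();                     // wake a waiting consumer
        }

        public synchronized Object get() throws InterruptedException {
            while (count == 0)               // empty: the consumer must wait
                wait();
            Object x = items[out];
            out = (out + 1) % items.length;
            count--;
            notifyAll();                     // wake a waiting producer
            return x;
        }
    }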

    Because components provide an infrastructure for asynchronous activities and coordinate between these activities, they have a number of characteristics that must be considered that do not exist when implementing normal objects. Some of these characteristics are enumerated here:

    • Because components coordinate between several threads, they cannot be created or owned by a single thread; therefore, some mechanism must be used to allow these objects to be registered, or to register themselves, with other objects representing the asynchronous activities. Many mechanisms are available to do this, ranging from using a simple parameter in a constructor to special purpose methods in GUI components to entire protocols such as the Lightweight Directory Access Protocol (LDAP) for remote objects.

    • Because components are used in separate asynchronous activities and, in the extreme case of distributed computing, on physically different computers, some mechanism must be implemented to allow the components to communicate with the asynchronous activities. Once again, these mechanisms range from simple method invocation in the case of threads to entire protocols when distributed objects are used.
    • Unlike objects for asynchronous activities, which can be designed using procedural flow, the logic in a component is generally organized around the state of the component when it is executed. Some mechanism needs to be designed to effectively implement the components to allow them to provide this coordination (see Chapter 3).
    • Some harmful interactions, called race conditions, can occur if the objects are not properly designed. One way to avoid race conditions is to make all the methods in the object synchronized and not allow an object to give up the object's lock while it is executing. This is called complete synchronization and is sufficient for non-component objects such as a string or a vector; however, components must coordinate between several objects, and complete synchronization is too restrictive to effectively implement this coordination. Much of the rest of the book is concerned with how to safely relax the synchronized conditions.
    • A second type of harmful interaction, called a deadlock, can result if the component is not properly designed. Deadlock can occur in any concurrent program when objects are improperly handled; however, the possibility of deadlock can actually be built into components that are not designed properly, even if the component is used correctly. Several examples of deadlock are provided in the text, particularly in Chapter 7 on Java events; a minimal sketch of the pattern follows this list.
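
    The sketch below (illustrative, not from the book) shows the classic way deadlock gets designed into code: two threads take the same two locks in opposite orders, so each can end up holding the lock the other needs.

    // Deadlock built in by design: t1 locks a then b, while t2 locks
    // b then a. If each acquires its first lock before the other's
    // second, both wait forever.
    public class DeadlockSketch {
        private static final Object a = new Object();
        private static final Object b = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    synchronized (a) {
                        pause();                // widen the deadlock window
                        synchronized (b) {
                            System.out.println("t1 has a and b");
                        }
                    }
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    synchronized (b) {
                        pause();
                        synchronized (a) {
                            System.out.println("t2 has b and a");
                        }
                    }
                }
            });
            t1.start();
            t2.start();
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        }
    }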

    Two examples are given here to show how these conditions affect a component. The first example of a component is a button object. A button provides a service to the GUI thread by relaying the occurrence of an event (the button has been pressed) to any other objects that are interested in this event (the Listeners). In Exhibit 1 (Program1.1), the button is created in the main thread and then passed to the GUI thread by adding it to the Frame object. It is then used by other threads that are interested in knowing when the button is pressed through the addActionListener methods in the button. Thus, the button is independent of the threads that use it (the GUI thread or any threads associated with the ActionListeners). So, the button is an independent object that provides a coordination service between multiple other threads as well as the service of informing other asynchronous activities (in this case, threads) that the button was pressed. A special mechanism, called an event, is used to allow the button to communicate with the threads with which it is interfacing. For this simple program, it is not necessary to worry about
    the state of the button or race or deadlock conditions, but the reasons why these could affect even a simple button are covered in detail in subsequent chapters.

    Another example of components is most distributed services that use distributed objects such as RMI, Common Object Request Broker Architecture (CORBA), or Enterprise Java Beans (EJB). When using distributed objects, the components exist on centrally located servers and provide services to remote clients, such as a Web browser on a PC, which are processes or programs running on other computers and can access the components through a network. In the case of distributed programs, all five of the problems that can occur in components (listed above) are of vital importance, as will be seen in Chapter 13.
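
    For a flavor of what such a distributed component looks like in Java, here is a minimal RMI remote interface (a sketch; the PieOrder name and its method are invented for illustration): clients on other machines call it through a network stub, so the server-side object that implements it is shared by many asynchronous activities and faces all of the component concerns listed above.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // The contract of a remote component: every method can be invoked
    // from another process or computer, so it must declare
    // RemoteException, and its implementation must be designed as a
    // concurrent component.
    public interface PieOrder extends Remote {
        int placeOrder(String flavor) throws RemoteException;
    }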

    1.5 Types of Concurrent Programming

    Before continuing to describe component programming, it is necessary to clear up some misconceptions about concurrent programming. Programmers often believe that concurrent programming is something that involves just one type of problem. For example, some programmers believe that all concurrent processing involves speeding up very large simulations, such as simulations of weather or of seismic activity in the Earth's crust. Other programmers believe that concurrent programming addresses only problems that occur when an operating system is run on a computer. Still others believe that concurrent programming is required only in distributed systems. Because these programmers approach the problems of concurrency with preconceived biases as to the type of problem they want to solve, they do not understand the methodologies for concurrency that address problems other than the ones in which they are interested.

    There is a very wide variety of reasons to use concurrency, and each of these reasons results in programs that are structured differently. There is really no one "best" way to implement concurrency and synchronization, as it depends on the type of problem being solved. Below is a list of some of the reasons why concurrent programming might be used. Each type of concurrent program is accompanied by a description of how the type of problem to be solved affects the type of solution that is developed. This text is largely interested in using concurrent programming for soft real time, distributed, and modeling purposes. While the techniques used do apply to other systems, more appropriate solutions usually exist for those problems. Also, note that a program is seldom any one type of concurrent program; often it will exhibit characteristics of many of the program types:

    • Incidental concurrency. Incidental concurrency occurs when concurrency exists but the asynchronous activities do not interact with each other. An extreme example would be a stand-alone computer in Washington running Word and a stand-alone computer in San Francisco running Excel. Incidental concurrency also occurs on operating systems such as UNIX, where multiple users are using a
    single computer but each user's program does not interact with any other program. So, while concurrency exists and must be taken into account in the operating system, from the point of view of the user's program no concurrent behavior must be considered. Incidental concurrency is really not very interesting and is not considered further in this book.
    • Resource utilization. Resource utilization, which is often associated with operating systems, occurs when a program is built around shared resources. For example, concurrency was implemented in the first operating systems to keep the expensive CPU occupied doing useful work on one program while another performed Input/Output (IO). This same principle occurs in a PC, where some parts of a program can be designed around special-purpose hardware, such as a graphics or IO processor, which is really a separate CPU running asynchronously to the main processor. This type of concurrency is often handled by the compiler or the operating system and is normally transparent to the programmer. When doing this type of concurrent programming, the programmer writes the program around the special resources that are present and shared. This type of concurrent programming is normally covered in books on operating systems and is not considered further in this book.
    • Distributed programming. In a distributed program, not all of the resources required by a program exist on a single computer but instead reside somewhere on a network of computers. To take advantage of these distributed resources, programs are designed around locating and accessing the resources. This can involve special methods and protocols to find the resources, such as with RMI
    using rmiregistry, and even writing entire protocols, as with socket-level protocols.
    • Parallel computing. Parallel computing is used when a program requires a large amount of real (clock) time, such as weather prediction models. These models can be calculated more rapidly by using a number of processors to work simultaneously on the problem. Parallel programs are designed around finding sections of the program that could be efficiently calculated in parallel. This is
    often accomplished by using special compilers that can take language structures such as loops and organize them so that they can be run on separate processors in parallel. Some systems add extensions to languages to help the compiler make these decisions.
    • Reactive programming. Reactive programs are programs for which some part of the program reacts to an external stimulus generated in another program or process. The two types of reactive programs are hard real time and soft real time.
    o Hard real time. Hard real time programs are programs that must meet a specific timing requirement. For example, the computer on a rocket must be able to guarantee that course adjustments are made every 1/1000th of a second; otherwise, the rocket will veer off course. Hard real time
    programs are designed around meeting these timing constraints and are
    often designed using timing diagrams to ensure that events are processed in the allotted time. The programs are then implemented in low-level languages, such as Assembly or C, which allow control over every clock cycle used by the computer.
    o Soft real time. Soft real time programs process the information in real time, as opposed to a batch mode, where the information is updated once or
    twice a day. These programs use current data but do not meet hard deadlines. One example is a Web-based ordering system that always has the most recent data but could take several seconds to provide it to the client. These systems are often designed around the services they provide, where the services are sometimes implemented as transactions. Objects that are components often process these transactions.
    • Availability. For some programs such as E-commerce Web sites it is important that they are accessible 24 hours a day, 7 days a week. Concurrency can be used
    to replicate the critical parts of the program and run them on multiple independent computers. This guarantees that the program will continue to be available even if one of the processors fails. These programs are designed so that critical pieces can be replicated and distributed to multiple processors. These systems are often soft real time programs with special capabilities to ensure their availability; thus, they use components in their design.

    • Ease of implementation. Using concurrent programming can make it easier to implement a program. This is true of most GUI programs, where concurrency with components makes it easier to implement buttons, TextFields, etc. Many of the objects used in these systems are designed as components.
    • System modeling. Sometimes concurrent programming is used because it better supports the abstract model of the system. These programs are often simulation programs modeled using objects, where some of the objects are active and some are passive. These programs are designed around making the abstract program model as close to the real-world problem as possible. Many of the objects that are modeled in these systems are components.

    1.6 Conclusion

    The purpose of this book is to help programmers, particularly students, understand how to apply components in programs and the special issues that are involved in writing programs for concurrent environments. To accomplish this, this chapter has provided a definition of a concurrent program that will be used as a basis for the rest of the book. It has also given a basic definition of a component that will be expanded in the rest of the book. It is hoped that this book will help the reader understand how to apply components to problems where they are needed, thus adding another tool to their toolbox of ways to solve problems.


  7. #7
    Join Date
    Nov 2010
    Posts
    1

    Can someone show me how to speak English confidently... I can't speak it, and it makes me so sad...


  8. #8
    Join Date
    Jun 2010
    Location
    http://tuoitredonganh.vn/diendan/forum.php
    Posts
    185

    Trích Nguyên văn bởi 232010078078 Xem bài viết
    Các bạn dịch nè

    Chapter 1: Introduction to Concurrent
    Programming and Components

    1.1 Introduction

    This chapter introduces the topics of the book, particularly concurrency and components. Because the concept of concurrency, particularly as it applies to programming, is so poorly understood by novice programmers, this chapter begins by giving a working definition of concurrent programming. This definition abandons the largely useless definition of concurrency as two programs running at the same time, replacing it with a definition that deals with how concurrency affects the implementation of a solution to the problem.
    Once the definition of concurrent programming has been given, special purpose objects called concurrent components are introduced. These objects are the most interesting objects in concurrent programming because they are the ones that coordinate the
    activities in a concurrent program. Without concurrent components a concurrent program is simply a set of unrelated activities. It is the components that allow these activities to work together to solve a problem. Components are also the most difficult objects to write. This is because the activities (or active objects) correspond closely to normal procedural programs, but components require a change in the way that most programmers think
    about programs. It is also in components that the problems specific to concurrent programming, such as race conditions and deadlock, are found and dealt with. The rest of the book is about how to implement concurrent programs using these concurrent components.

    Finally, this chapter explains the different types of concurrent programs and how these programs result in various types of programs. Part of understanding concurrent programming is realizing that there is more than one reason to do concurrent programming. An important aspect of any program is that it should solve a problem. Concurrency improves the solution to many different types of problems. Each of these problem types looks at the problem to be solved in a slightly different manner an4fcj rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr rrd thus requires the programmer to approach the problem in a slightly different way.

    1.2 Chapter Goals

    After completing this chapter, you should be able to:

    • Understand why concurrent programming is important.
    • Give a working definition of a concurrent program.
    • Understand the two types of synchronization and give examples of each.
    • Give a definition of the term component and know what special problems can be encountered when using components.

    • Describe several different reasons for doing concurrent programming and how each of these reasons leads to different design decisions and different program implementation.

    1.3 What Is Concurrent Programming? Lập tŕnh đồng thời là ǵ?
    The purpose of this book is to help programmers understand how to create concurrent programs. Specifically, it is intended to help programmers understand and program special concurrent objects, called concurrent components. Because these components are used only in concurrent programs, a good definition of a concurrent program is needed before components can be defined and methods given for their implementation.
    This section provides a good working definition of a concurrent program after first explaining why concurrent programming is an important concept for a programmer to know. The working definition of a concurrent program provided here will serve as a basis for understanding concurrent programming throughout the rest of the book.
    1.3.1 Why Do Concurrent Programming? Tại sao lập tŕnh đồng thời
    The first issue in understanding concurrent programming is to provide a justification for studying concurrent programming. Most students and, indeed, many professional programmers have never written a program that explicitly creates Java threads, and it is possible to have a career in programming without ever creating a thread. Therefore, many programmers believe that concurrency in programming is not used in most real systems, and so it is a sidebar that can be safely ignored. However, that the use of concurrent programming is hidden from programmers is itself a problem, as the effects of a concurrent program can seldom be safely ignored.

    When asked in class, most students would say they that have never implemented a concurrent program, but then they can be shown Exhibit 1 (Program1.1). This program puts a button in a JFrame and then calculates Fibonacci numbers in a loop. The fact that there is no way to set the value of stopProgram to false within the loop implies that the loop is infinite, and so it can never stop; however, when the button is pressed the loop eventually stops. When confronted with this behavior, most students correctly point out that when the Stop Calculation button is pressed the value of stopProgram is set to true and the loop can exit; however, at no place in the loop is the button checked to see if it has been pressed. So, some mechanism must be present that is external to the loop that allows the value of stopProgram to be changed. The mechanism that allows this value to be changed is concurrency.

    import java.awt.*;
    import java.awt.event.*;

    /**
    * Purpose: This program illustrates the presence of threads in
    * a Java program that uses a GUI. A button is created
    * that simply toggles the variable "stopProgram" to
    * false, which should stop the program. Once the
    * button is created, the main method enters an
    * infinite loop. Because the loop does not explicitly
    * call the button, there appears to be no way for the
    * program to exit. However, when the button is pushed,
    * the program sets the stopProgram to false, and
    * the program exits, illustrating that the button is
    * running in a different thread from the main method.
    */

    public class Fibonacci
    {
    private static boolean stopProgram = false;
    public static void main(String argv[]) {
    Frame myFrame = new Frame("Calculate Fibonacci Numbers"); List myList = new List(4);
    myFrame.add(myList, BorderLayout.CENTER); Button b1 = new Button("Stop Calculation"); b1.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
    stopProgram = true;
    }
    });

    Button b2 = new Button("Exit");
    b2.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) { System.exit(0);
    }
    });

    Panel p1 = new Panel();
    p1.add(b1);
    p1.add(b2);
    myFrame.add(p1, BorderLayout.SOUTH); myFrame.setSize(200, 300); myFrame.show();

    int counter = 2;
    while(true) {
    if (stopProgram)
    break;
    counter + = 1;
    myList.add("Num = "+ counter + "Fib = "+
    fibonacci(counter));
    myFrame.show();
    }

    //Note: stopProgram cannot change value to true in the above
    //loop. How does the program get to this point?
    myList.add("Program Done");
    }

    public static int fibonacci(int NI) {
    if (NI < = 1) return 1;
    return fibonacci(NI - 1) + fibonacci(NI - 2);
    }
    }

    What is happening in Exhibit 1 (Program1.1) is that, behind the scenes and hidden from
    the programmer, a separate thread, the Graphical User Interface (GUI) thread, was started. This thread is a thread started by Java that is running all the time, waiting for the Stop

    Calculation button to be pressed. When this button is pressed, the GUI thread runs for a short period of time concurrently with the main thread (the thread doing the calculation of Fibonacci numbers) and sets the value of stopProgram to true. Thus, Exhibit 1 (Program1.1) is a very simple example of a concurrent program. Because nearly every Java programmer at some point has written a program that uses buttons or other Abstract Window Tool Kit (AWT) or Swing components, nearly every Java programmer has written a concurrent program.

    This brings up the first reason to study concurrent programming. Regardless of what a programmer might think, concurrent programming is ubiquitous; it is everywhere. Programmers using visual components in nearly any language are probably using some form of concurrency to implement those components. Programmers programming distributed systems, such as programs that run on Web servers that produce Web pages, are doing concurrent programming. Programmers who write UNIX ".so" (shared object) files or Windows ".com" or ".ddl" files are writing concurrent programs. Concurrency in programs is present, if hidden, in nearly every major software project, and it is unlikely that a programmer with more than a few years left in a career could get by without encountering it at some point. And, as will be seen in the rest of the book, while the fact that a program is concurrent can be hidden, the effects of failing to account for concurrency can result in catastrophic consequences.

    The second reason to study concurrent programming is that breaking programs into parts using concurrency can significantly reduce the complexity of a program. For example, there was a time when implementing buttons, as in Exhibit 1 (Program1.1), involved requiring the loop to check whether or not a button had been pressed. This meant that a programmer had to consistently put code throughout a program to make sure that events were properly handled. Using threads has allowed this checking to be handled in a separate thread, thus relieving the program of the responsibility. The use of such threads allows programmers to write code to solve their problem, not to perform maintenance checks for other objects.

    The third reason to study concurrent programming is that its use is growing rapidly, particularly in the area of distributed systems. Every system that runs part of the program on separate computers is by nearly every definition (including the one used in this book) concurrent. This means every browser access to a Web site involves some level of concurrency. This chain of concurrency does not stop at the Web server but normally extends to the resources that the Web server program uses. How to properly implement these resources requires the programmer to at least understand the problems involved in concurrent access or the program will have problems, such as occasionally giving the wrong answer or running very slowly.

    The rest of this text is devoted to illustrating how to properly implement and control concurrency in a program and how to use concurrency with objects in order to simplify and organize a program. However, before the use of concurrency can be described, a working definition of concurrency, particularly in relationship to objects, must be given. Developing that working definition is the purpose of the rest of this chapter.

    1.3.2 A Definition of Concurrent Programming

    Properly defining a concurrent program is not an easy task. For example, the simplest definition would be when two or more programs are running at the same time, but this definition is far from satisfactory. For example, consider Exhibit 1 (Program1.1). This program has been described as concurrent, in that the GUI thread is running separately from the main thread and can thus set the value of the stopProgram variable outside of the calculation loop in the main thread. However, if this program is run on a computer with one Central Processing Unit (CPU), as most Windows computers are, it is impossible for more than one instruction to be run at a time; thus, by the simple definition given above, this program is not concurrent.

    Another program with this simple definition can be illustrated by the example of two computers, one running a word processor in San Francisco and another running a spreadsheet in Washington, D.C. By the definition of a concurrent program above, these are concurrent. However, because the two programs are in no way related, the fact that they are concurrent is really meaningless.

    It seems obvious that a good definition of concurrent programming would define the first example as concurrent and the second as not concurrent; therefore, something is fundamentally wrong with this simple definition of concurrent programming. In fact, the simple-minded notion of concurrency involving two activities occurring at the same time is a poor foundation on which to attempt to build a better definition of the term concurrency. To create a definition of concurrency that can be used to describe concurrent programming, a completely new foundation needs to be built. A better, workable definition is supplied in the rest of Section 1.3.2.

    1.3.2.1 Asynchronous Activities

    Defining a concurrent program begins by defining the basic building block of a program which will be called an activity. An activity could be formally defined as anything that could be done by an abstract Turing machine or as an algorithm. However, what is of interest here is a working definition, and it is sufficient to define an activity as simply a series of steps implemented to perform a task. Examples of an activity would be baking a pie or calculating a Fibonacci number on a computer. The steps required to perform a
    task will be called an ordering.

    Activities can be broken down into subactivities, each an activity itself. For example, baking a pie could consist of making the crust, making the filling, filling the crust with the filling, and baking the pie. Exhibit 2 shows the steps in baking a pie, where the crust must first be made, then the filling made, the filling added to the crust, and the pie baked. If the order of these activities is completely fixed, then the ordering is called a total ordering, as all steps in all activities are ordered. In the case of a total ordering of events, the next step to be taken can always be determined within a single activity. An activity for which the order of the steps is determined by the activity is called a synchronous activity. Note that partial orderings can also be controlled by synchronous activities; these are implemented by the programming equivalent of "if" and "while" statements.

    Exhibit 2: Synchronous Activity to Make a Pie



    In the case of making a pie it is not necessary to first make the crust and then make the filling. The filling could be made the night before, and the crust could then be made in the morning before combining the two to make a pie. If the order in which the crust and the filling are made can be changed, then the ordering is called a partial ordering (the order of steps to make the crust and the order of steps to make the filling remain fixed, but either can be done first). However, if one activity must always finish before the other begins, it is possible to implement this behavior with a synchronous activity.

    A special case occurs when, for a partial ordering, the next step is not determined by a single activity. To show this, several values of time must be defined. The time after which preparing the crust can be started is t1c, and the time by which it must be completed is t2c. The time after which preparing the filling can be started is t1f, and the time by which it must be completed is t2f. Now, if (t1c <= t1f < t2c) or (t1f <= t1c < t2f), then the activities of making the crust and the filling can (but do not necessarily have to) overlap. If the steps overlap, then the overall ordering of the steps cannot be determined within any one task or, thus, any one activity. One example of this situation for baking a pie is illustrated in the Gantt chart in Exhibit 3. Note that many other timelines are possible, as making the crust does not have to start at t1c, nor does it have to end at t2c; it simply has to occur between those two times. The same is true of making the filling. The two activities might not actually overlap; it is sufficient that they can overlap.
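    To make the overlap condition concrete, here is a minimal Java sketch (our own hypothetical helper, not code from the book; the names t1c, t2c, t1f, and t2f follow the text) that evaluates the condition for two time windows:

    public class OverlapCheck {
        // True when the crust window [t1c, t2c) and the filling window
        // [t1f, t2f) satisfy (t1c <= t1f < t2c) or (t1f <= t1c < t2f),
        // i.e., the two preparation activities *can* overlap.
        static boolean canOverlap(int t1c, int t2c, int t1f, int t2f) {
            return (t1c <= t1f && t1f < t2c) || (t1f <= t1c && t1c < t2f);
        }

        public static void main(String[] args) {
            // Crust may be made between t=0 and t=4, filling between t=2 and t=6:
            System.out.println(canOverlap(0, 4, 2, 6));  // true: the steps may interleave
            // Filling must finish (t=0..2) before the crust may start (t=3..5):
            System.out.println(canOverlap(3, 5, 0, 2));  // false: a total ordering suffices
        }
    }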

    The only way that these two activities can overlap in this manner is if the lists of steps for the activities are being executed independently. For example, it is possible that two bakers are responsible for the pie, one making the filling and the other making the crust. It is also possible that one baker is responsible for both the crust and the filling but is switching back and forth between doing steps from one part of the recipe (making the crust) and another part of the recipe (making the filling). However the steps are accomplished, by the definition given here the steps involved in the two subtasks are being executed independently, or asynchronously, of each other. This type of activity is called an asynchronous activity.

    The definition of an asynchronous activity leads to a very simple definition of concurrency: Concurrency is defined as the presence of two or more asynchronous activities.
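    As a minimal Java sketch of this definition (our own example, written in the anonymous-class style of Program1.1), the two subtasks of the pie can be given to two threads, each executing its steps independently of the other:

    public class TwoActivities {
        public static void main(String[] args) {
            // Each thread executes its own series of steps independently of
            // the other; the two activities are asynchronous, hence concurrent.
            Thread crust = new Thread(new Runnable() {
                public void run() { System.out.println("making the crust"); }
            });
            Thread filling = new Thread(new Runnable() {
                public void run() { System.out.println("making the filling"); }
            });
            crust.start();
            filling.start();
            // The output order is not determined by either activity alone:
            // either line may print first. Even on a one-CPU machine the two
            // threads merely *can* interleave, which is all concurrency requires.
        }
    }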

    When asynchronous activities are present in a program, it is possible (but not necessary) for the steps of the two activities to interleave. As we will see in Chapter 2, the number of different ways they can interleave can be quite large, and the results can be quite unexpected. However, note that from the definition of asynchronous activities, the two activities do not have to run at the same time; they simply have to be able to run at the same time. This is a useful distinction, because the problems that will be encountered in concurrency occur not because the activities execute at the same time but because they can interleave their executions. It is also useful because if a program allows activities to interleave, it must protect against the ill effects of that interleaving whether it occurs or not. As will be seen, this means that methods that might be used concurrently must be synchronized, even if the vast majority of the time the use of the synchronized statement provides no benefit.
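    The following minimal sketch (ours, not the book's) shows why such protection is needed: the single statement count++ is actually a read-modify-write sequence whose steps can interleave with another thread's, so the methods are synchronized even though most individual calls never actually contend:

    public class Counter {
        private int count = 0;

        // Without "synchronized", two threads could interleave the
        // read-increment-write steps of count++ and lose an update.
        // The method must be protected whether the interleaving
        // actually happens on a given run or not.
        public synchronized void increment() {
            count++;
        }

        public synchronized int get() {
            return count;
        }
    }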

    The importance of this improved definition of concurrency, compared to the definition of concurrency as multiple activities happening at the same time, cannot be overemphasized. This definition implies the types of problems that can occur and the way to solve those problems. If a concurrent program does not actually run two activities at the same time, but it can do so, then action must be taken to make sure problems do not occur. Any argument as to whether these two activities actually run at the same time, or whether they generally run one after the other, is moot. Arguments about how the activities are actually implemented (for example, are priorities present in the system?) and how the implementation might affect the interactions (does the higher priority process always have to run first?) also do not matter. If asynchronous activities are present, then the program must account for this behavior.

    It should be noted that the definition of asynchronous activities solves the first problem with the simple definition of concurrency. The two threads running in Exhibit 1 (Program1.1) are asynchronous activities, thus they are concurrent. However, the two computers running in different cities are also asynchronous activities, so the definition of concurrent programming must be further tightened.

    1.3.2.2 Synchronization of Asynchronous Activities

    That two or more asynchronous activities are concurrent is a good definition of concurrency, but it is not yet a useful one. As was mentioned before, two asynchronous activities that are unrelated are concurrent, but that does not mean any particular action must be considered when reasoning about them. A useful definition requires some interaction between the activities. This interaction requires that the activities coordinate (or synchronize), so this section defines how synchronization affects the activities.

    Sebesta [SEB99] says that, "Synchronization is a mechanism that controls the order in which tasks execute." In terms of activities, this definition suggests that, while the asynchronous activities represent separate control over the execution of steps in the activity, at times the asynchronous activities agree to come together and cooperate in order to create valid partial orderings within the activities. Sebesta defines two types of synchronization, competitive synchronization and cooperative synchronization. To see how synchronization affects the partial orderings within an asynchronous activity, examples of these two types of synchronization are given.

    Exhibit 4 gives examples of both types of synchronization. The figure illustrates two asynchronous activities: making a pie crust and making a filling. These two activities synchronize twice, first on a shared resource, illustrating competitive synchronization, and second around an event, illustrating cooperative synchronization. To understand competitive synchronization, consider what would happen if both recipes called for mixing the ingredients in a large bowl, but only one large bowl is available, so both activities must use the same bowl. If both activities used the bowl without considering the actions of the other activity, it would be possible to mix the filling and the crust at the same time, which would result in an incorrect solution to the problem of making the pie. Therefore, the two activities must compete for the use of the resource and synchronize on it, with the rule that when one activity is using it the other cannot continue until the resource becomes free. In this example, the bowl is a shared resource that the two activities must synchronize on in order to correctly solve the problem; this is referred to as competitive synchronization.
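    In Java, a shared resource such as the bowl might be guarded as in the following minimal sketch (our illustration only; the book develops its own component implementations in later chapters):

    public class Bowl {
        private boolean inUse = false;

        // An activity asks for the bowl; if the other activity holds it,
        // this one waits until the resource becomes free.
        public synchronized void acquire() throws InterruptedException {
            while (inUse) {
                wait();
            }
            inUse = true;
        }

        // Releasing the bowl wakes any activity waiting to use it.
        public synchronized void release() {
            inUse = false;
            notifyAll();
        }
    }

    A baker thread would call acquire() before mixing and release() afterwards; while one activity holds the bowl, the other waits, which is exactly the competitive rule described above.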

    The second type of synchronization occurs when asynchronous activities must wait for an event to occur before continuing. In Exhibit 4, this occurs when the making of the pie crust and the filling must both be completed before the filling can be added to the crust and the pie baked. Because the two activities must cooperate in waiting for this event, this type of synchronization is called cooperative synchronization. Note that, while the synchronization does impose a partial ordering on the asynchronous activities, it does not make them synchronous. Except for when the activities must synchronize for some reason, they are still asynchronous.
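    Cooperative synchronization around the "both parts are done" event could be sketched like this (again a hypothetical helper of ours, not the book's code):

    public class CompletionEvent {
        private int remaining;

        public CompletionEvent(int parties) {
            this.remaining = parties; // e.g., 2: the crust and the filling
        }

        // Each activity signals that its part is done.
        public synchronized void done() {
            remaining--;
            notifyAll();
        }

        // The baking step waits until every party has signaled completion.
        public synchronized void awaitAll() throws InterruptedException {
            while (remaining > 0) {
                wait();
            }
        }
    }

    The crust and filling threads each call done() when finished, while the thread that assembles and bakes the pie blocks in awaitAll() until both have signaled.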

    1.3.2.3 Concurrent Programming

    With the introduction of asynchronous activities and synchronization, the background is now in place to define concurrency. Concurrency is the presence of asynchronous activities that interact and thus must at some point in their execution implement either competitive or cooperative synchronization. This is a workable definition of concurrency, as it does not require that the activities actually run at the same time, only that they behave as if they could. It also requires that they synchronize in order to be considered concurrent.

    Now that a workable definition of concurrency has been built, it is relatively easy to build a definition of a concurrent program:

    Concurrent Program: A program that contains asynchronous activities which synchronize at one or more points or on one or more resources during execution.

    By design, this definition does not specify how asynchronous activities are implemented in the program. These activities might be Ada tasks, UNIX processes, Pthreads, or Java threads. It also does not say how the synchronization is achieved, which once again could be through Ada select statements, UNIX operating system calls, or use of a Java synchronized statement. Further, it does not say how the activities communicate, whether by method calls in a single process, interprocess communications such as UNIX pipes, or Remote Method Invocation (RMI) to processes on completely different computers. These are all just details of individual concurrent programs; the basic principles of concurrency will always be the same.

    [1]The term component is poorly defined and widely used in object-oriented programming. Because some readers might use the term in non-concurrent contexts, the concept is introduced as concurrent components here; however, all components in this book are concurrent components, so the "concurrent" part of the term will be dropped, and the term component will represent a concurrent component.

    1.4 Components

    An interesting way to look at a concurrent program is to think of it as containing two types of units: activities that act on other entities, and entities that control the interactions of these activities. If these units are objects, then all objects in a concurrent program can be made either active (asynchronous activities such as threads) or passive (such as a shared resource or an event that is used for synchronization). Other types of simple, non-concurrent objects, such as vectors or StringTokenizers, are used by active and passive objects, but these are not involved in the concurrency in the program.

    Most programmers do not have problems understanding active objects, as they are simply instructions that are written and executed in a procedural order that, in principle, can be represented by a flow chart; nor do they have problems understanding non-concurrent objects. This is probably because the behavior of such an object can normally be understood in the context of the activity within which it is run, much like a procedural program. This is how students have been taught to program since their first introductory class.

    However, passive objects, which from now on will be called concurrent components or simply components, are much more difficult for most programmers. This is likely because they provide the infrastructure for the asynchronous activities that execute in a concurrent program, which is a somewhat foreign concept to many programmers.

    Components in the example of making a pie are the shared mixing bowl and the event that signifies that preparation of the crust and filling is completed. They control the behavior of the asynchronous activities so that they coordinate and produce a correct result. They also sit between asynchronous activities and are shared and used by multiple asynchronous activities.

    Note that not all objects that are non-active are components. For example, a vector is safe to use in a multi-threaded program, but it is not a component, because even if it is used by a number of threads it is not normally used to coordinate between those threads. Objects are added to or removed from the vector, but the vector is used just to store data elements, not to coordinate the asynchronous activities. A special type of vector called a bounded buffer (presented in Chapter 3) is actually used to coordinate between asynchronous activities.
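    As a preview of the idea (only a sketch under our own assumptions; Chapter 3 presents the book's actual implementation), a bounded buffer differs from a plain vector precisely because its methods make one activity wait on the state produced by another:

    import java.util.LinkedList;

    public class BoundedBuffer {
        private final LinkedList<Object> items = new LinkedList<Object>();
        private final int capacity;

        public BoundedBuffer(int capacity) {
            this.capacity = capacity;
        }

        // A producer must wait while the buffer is full ...
        public synchronized void put(Object item) throws InterruptedException {
            while (items.size() == capacity) wait();
            items.addLast(item);
            notifyAll(); // ... and wakes any consumer waiting for data.
        }

        // A consumer must wait while the buffer is empty.
        public synchronized Object get() throws InterruptedException {
            while (items.isEmpty()) wait();
            Object item = items.removeFirst();
            notifyAll(); // Wakes any producer waiting for free space.
            return item;
        }
    }

    Unlike a plain vector, put() and get() exist precisely to coordinate the producer and consumer activities, and that coordination is what makes the bounded buffer a component.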

    Because components provide an infrastructure for asynchronous activities and coordinate between these activities, they have a number of characteristics that must be considered that do not exist when implementing normal objects. Some of these characteristics are enumerated here:

    • Because components coordinate between several threads, they cannot be created or owned by a single thread; therefore, some mechanism must be used to allow these objects to be registered, or to register themselves, with other objects representing the asynchronous activities. Many mechanisms are available to do this, ranging from a simple parameter in a constructor, to special-purpose methods in GUI components, to entire protocols such as the Lightweight Directory Access Protocol (LDAP) for remote objects.

    • Because components are used in separate asynchronous activities and, in the extreme case of distributed computing, on physically different computers, some mechanism must be implemented to allow the components to communicate with the asynchronous activities. Once again, these mechanisms range from simple method invocation in the case of threads to entire protocols when distributed objects are used.
    • Unlike objects for asynchronous activities, which can be designed using procedural flow, the logic in a component is generally organized around the state of the component when it is executed. Some mechanism needs to be designed to effectively implement the components to allow them to provide this coordination (see Chapter 3).
    • Some harmful interactions, called race conditions, can occur if the objects are not properly designed. One way to avoid race conditions is to make all the methods in the object synchronized and not allow an object to give up the object's lock while it is executing. This is called complete synchronization and is sufficient for non-component objects such as a string or a vector; however, components must coordinate between several objects, and complete synchronization is too restrictive to effectively implement this coordination. Much of the rest of the book is concerned with how to safely relax the synchronized conditions.
    • A second type of harmful interaction, called a deadlock, can result if the component is not properly designed. Deadlock can occur in any concurrent program when objects are improperly handled; however, the possibility of deadlock can actually be built into components that are not designed properly, even if the component is used correctly. Several examples of deadlock are provided in the text, particularly in Chapter 7 on Java events; a minimal sketch of the problem follows this list.
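    The following minimal sketch (our example; the book's deadlock discussions, particularly for Java events, are in Chapter 7) shows how easily deadlock can be designed into interacting objects. Each thread holds one lock while requesting the other's, so with the sleeps in place the program almost always hangs:

    public class DeadlockDemo {
        private static final Object bowl = new Object();
        private static final Object spoon = new Object();

        public static void main(String[] args) {
            // Baker 1 takes the bowl, then wants the spoon.
            new Thread(new Runnable() {
                public void run() {
                    synchronized (bowl) {
                        pause();
                        synchronized (spoon) { System.out.println("baker 1 done"); }
                    }
                }
            }).start();

            // Baker 2 takes the spoon, then wants the bowl. Each thread now
            // holds one lock and waits forever for the other: deadlock.
            new Thread(new Runnable() {
                public void run() {
                    synchronized (spoon) {
                        pause();
                        synchronized (bowl) { System.out.println("baker 2 done"); }
                    }
                }
            }).start();
        }

        // Small delay to make each thread grab its first lock before
        // requesting the second, so the deadlock is (almost) certain.
        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        }
    }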

    Two examples are given here to show how these conditions affect a component. The first example of a component is a button object. A button provides a service to the GUI thread by relaying the occurrence of an event (the button has been pressed) to any other objects that are interested in this event (the listeners). In Exhibit 1 (Program1.1), the button is created in the main thread and then passed to the GUI thread by adding it to the Frame object. It is then used by other threads that are interested in knowing when the button is pressed, through the addActionListener method of the button. Thus, the button is independent of the threads that use it (the GUI thread or any threads associated with the ActionListeners). So, the button is an independent object that provides a coordination service between multiple other threads, as well as the service of informing other asynchronous activities (in this case, threads) that the button was pressed. A special mechanism, called an event, is used to allow the button to communicate with the threads with which it is interfacing. For this simple program, it is not necessary to worry about the state of the button or about race or deadlock conditions, but the reasons why these could affect even a simple button are covered in detail in subsequent chapters.

    Other examples of components are found in most distributed services that use distributed objects, such as RMI, the Common Object Request Broker Architecture (CORBA), or Enterprise JavaBeans (EJB). When distributed objects are used, the components exist on centrally located servers and provide services to remote clients, such as a Web browser on a PC; the clients are processes or programs running on other computers that access the components through a network. In the case of distributed programs, all five of the problems that can occur in components (listed above) are of vital importance, as will be seen in Chapter 13.

    1.5 Types of Concurrent Programming

    Before continuing to describe component programming, it is necessary to clear up some misconceptions about concurrent programming. Programmers often believe that concurrent programming is something that involves just one type of problem. For example, some programmers believe that all concurrent processing involves speeding up very large simulations, such as simulations of weather or of seismic activity in the Earth's crust. Other programmers believe that concurrent programming addresses only problems that occur when an operating system is run on a computer. Still others believe that concurrent programming is required only in distributed systems. Because these programmers approach the problems of concurrency with preconceived biases as to the type of problem they want to solve, they do not understand the methodologies for concurrency that address problems other than the ones in which they are interested.

    There is a very wide variety of reasons to use concurrency, and each of these reasons results in programs that are structured differently. There is really no one "best" way to implement concurrency and synchronization, as the right choice depends on the type of problem being solved. Below is a list of some of the reasons why concurrent programming might be used. Each type of concurrent program is accompanied by a description of how the type of problem to be solved affects the type of solution that is developed. This text is largely interested in using concurrent programming for soft real time, distributed, and modeling purposes. While the techniques presented do apply to other types of systems, those problems usually have more appropriate solutions. Also, note that a program is seldom any one type of concurrent program; often it will exhibit characteristics of many of the program types:

    • Incidental concurrency. Incidental concurrency occurs when concurrency exists but the asynchronous activities do not interact with each other. An extreme example would be a stand-alone computer in Washington running Word and a stand-alone computer in San Francisco running Excel. Incidental concurrency also occurs on operating systems such as UNIX, where multiple users are using a single computer but each user's program does not interact with any other program. So, while concurrency exists and must be taken into account in the operating system, from the point of view of the user's program no concurrent behavior must be considered. Incidental concurrency is really not very interesting and is not considered further in this book.
    • Resource utilization. Resource utilization, which is often associated with operating systems, occurs when a program is built around shared resources. For example, concurrency was implemented in the first operating systems to keep the expensive CPU occupied doing useful work on one program while another performed Input/Output (IO). This same principle applies in a PC, where some parts of a program can be designed around special-purpose hardware, such as a graphics or IO processor, which is really a separate CPU running asynchronously to the main processor. This type of concurrency is often handled by the compiler or the operating system and is normally transparent to the programmer. When doing this type of concurrent programming, the programmer writes the program around the special resources that are present and shared. This type of concurrent programming is normally covered in books on operating systems and is not considered further in this book.
    • Distributed programming. In a distributed program, not all of the resources required by the program exist on a single computer but instead reside somewhere on a network of computers. To take advantage of these distributed resources, programs are designed around locating and accessing the resources. This can involve special methods and protocols to find the resources, such as RMI's rmiregistry, and can even involve writing entire protocols at the socket level.
    • Parallel computing. Parallel computing is used when a program requires a large amount of real (clock) time, such as weather prediction models. These models can be calculated more rapidly by using a number of processors working simultaneously on the problem. Parallel programs are designed around finding sections of the program that can be efficiently calculated in parallel. This is often accomplished by using special compilers that can take language structures such as loops and organize them so that they can be run on separate processors in parallel. Some systems add extensions to languages to help the compiler make these decisions.
    • Reactive programming. Reactive programs are programs for which some part of the program reacts to an external stimulus generated in another program or process. The two types of reactive programs are hard real time and soft real time.
    o Hard real time. Hard real time programs are programs that must meet specific timing requirements. For example, the computer on a rocket must be able to guarantee that course adjustments are made every 1/1000th of a second; otherwise, the rocket will veer off course. Hard real time programs are designed around meeting these timing constraints and are often designed using timing diagrams to ensure that events are processed in the allotted time. The programs are then implemented in low-level languages, such as assembly or C, which allow control over every clock cycle used by the computer.
    o Soft real time. Soft real time programs process information in real time, as opposed to a batch mode, where the information is updated once or twice a day. These programs use current data but do not meet hard deadlines. One example is a Web-based ordering system that always has the most recent data but could take several seconds to provide it to the client. These systems are often designed around the services they provide, where the services are sometimes implemented as transactions. Objects that are components often process these transactions.
    • Availability. For some programs, such as e-commerce Web sites, it is important that they be accessible 24 hours a day, 7 days a week. Concurrency can be used to replicate the critical parts of the program and run them on multiple independent computers, which guarantees that the program will continue to be available even if one of the processors fails. These programs are designed so that critical pieces can be replicated and distributed to multiple processors. These systems are often soft real time programs with special capabilities to ensure their availability; thus, they use components in their design.

    • Ease of implementation. Using concurrent programming can make it easier to implement a program. This is true of most GUI programs, where concurrency with components makes it easier to implement buttons, TextFields, etc. Many of the objects used in these systems are designed as components.
    • System modeling. Sometimes concurrent programming is used because it better supports the abstract model of the system. These programs are often simulation programs modeled using objects, where some of the objects are active and some are passive. These programs are designed around making the abstract program model as close to the real-world problem as possible. Many of the objects that are modeled in these systems are components.

    1.6 Conclusion

    The purpose of this book is to help programmers, particularly students, understand how to apply components in programs and the special issues that are involved in writing programs for concurrent environments. To accomplish this, this chapter has provided a definition of a concurrent program that will be used as a basis for the rest of the book. It has also given a basic definition of a component that will be expanded in the rest of the book. It is hoped that this book will help the reader understand how to apply components to problems where they are needed, thus adding another tool to their toolbox of ways to solve problems.

    That was too long, friend; next time please keep it shorter. Let's just try translating a few passages of Chapter 1 and see how it goes.

    Chapter 1: Introduction to Concurrent Programming and Components

    1.1 Introduction

    This chapter introduces the topics of the book, particularly concurrency and components. Because the concept of concurrency, particularly as it applies to programming, is poorly understood by novice programmers, this chapter begins by giving a working definition of concurrent programming. This definition abandons the largely useless definition of concurrency as two programs running at the same time, replacing it with a definition that deals with how concurrency affects the implementation of a solution to a problem.

    Once the definition of a concurrent program has been given, special-purpose objects called concurrent components are introduced. These are the most interesting objects in concurrent programming because they are the ones that coordinate the activities in a concurrent program. Without concurrent components, a concurrent program is simply a collection of unrelated activities; it is the components that allow these activities to work together to solve a problem. Components are also the most difficult objects to write. This is because activities (or active objects) correspond closely to normal procedural programs, whereas components require a change in the way most programmers think about programs. It is also in components that the problems specific to concurrent programming, such as race conditions and deadlock, are found and handled. The rest of the book is about how to implement concurrent programs using these concurrent components.

    Finally, this chapter explains the different reasons for doing concurrent programming and how these reasons result in different types of programs. Part of understanding concurrent programming is realizing that there are many reasons to do it. An important aspect of any program is that it solves a problem, and concurrency improves the solutions to many different kinds of problems. Each kind of problem views the issues to be solved in a slightly different way and thus requires the programmer to approach the problem in a slightly different way.

    1.2 Goals of this Chapter

    After completing this chapter, you should be able to:

    • Understand why concurrent programming is important.
    • Give a working definition of a concurrent program.
    • Understand the two types of synchronization and give examples of each.
    • Give a definition of the term component and know what special problems can be encountered when using components.
    • Describe several different reasons for implementing concurrent programs and how each of these reasons leads to different design decisions and different program implementations.

    1.3 What Is Concurrent Programming?

    The purpose of this book is to help programmers understand how to create concurrent programs. Specifically, it is designed to help programmers understand and program the special concurrent objects called concurrent components. Because these components are used only in concurrent programs, a good definition of a concurrent program is needed before the components can be defined and methods for implementing them given. This section provides a good working definition of a concurrent program, after first explaining why concurrent programming is an important concept for a programmer to know. The working definition of a concurrent program provided here will be the basis for understanding concurrent programming throughout the rest of the book.

    1.3.1 Why Concurrent Programming?

    The first problem in understanding concurrent programming is providing a justification for studying it. Most students and, indeed, many professional programmers have never written a Java program that explicitly creates a thread, and it is possible to have a career in programming without ever creating one. Many programmers therefore believe that concurrency is not used in most real systems and that it is a topic that can safely be ignored. However, the fact that the use of concurrency is hidden from the programmer is itself a problem, as the effects of concurrency in a program can seldom be ignored.

    When asked in class, most students will say that they have never implemented a concurrent program, but then they can be shown Exhibit 1 (Program1.1). This program puts a button in a Frame and then calculates Fibonacci numbers in a loop. Because there is no way to change the value of stopProgram inside the loop, the loop appears to be infinite and should never stop; however, when the button is pressed, the loop does stop. When confronted with this behavior, most students correctly point out that when the Stop Calculation button is pressed, the value of stopProgram is set to true and the loop can exit; however, at no place in the loop is the button checked to see whether it has been pressed. Therefore, some mechanism must exist outside of the loop that allows the value of stopProgram to be changed. The mechanism that allows this value to be changed is concurrency.

    import java.awt.*;
    import java.awt.event.*;

    /**
     * Purpose: This program illustrates the presence of threads in
     *          a Java program that uses a GUI. A button is created
     *          that simply sets the variable "stopProgram" to
     *          true, which stops the program. Once the button is
     *          created, the main method enters a seemingly infinite
     *          loop. Because the loop does not explicitly check the
     *          button, there appears to be no way for the program
     *          to exit. However, when the button is pushed, the
     *          program sets stopProgram to true, and the loop
     *          exits, illustrating that the button is running in a
     *          different thread from the main method.
     */

    public class Fibonacci {
        private static boolean stopProgram = false;

        public static void main(String argv[]) {
            Frame myFrame = new Frame("Calculate Fibonacci Numbers");
            List myList = new List(4);
            myFrame.add(myList, BorderLayout.CENTER);

            // This listener runs in the GUI thread and sets the flag
            // that the calculation loop in the main thread checks.
            Button b1 = new Button("Stop Calculation");
            b1.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    stopProgram = true;
                }
            });

            Button b2 = new Button("Exit");
            b2.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    System.exit(0);
                }
            });

            Panel p1 = new Panel();
            p1.add(b1);
            p1.add(b2);
            myFrame.add(p1, BorderLayout.SOUTH);
            myFrame.setSize(200, 300);
            myFrame.show();

            int counter = 2;
            while (true) {
                if (stopProgram)
                    break;
                counter += 1;
                myList.add("Num = " + counter + " Fib = " + fibonacci(counter));
                myFrame.show();
            }

            // Note: stopProgram cannot change value to true in the above
            // loop. How does the program get to this point?
            myList.add("Program Done");
        }

        public static int fibonacci(int NI) {
            if (NI <= 1)
                return 1;
            return fibonacci(NI - 1) + fibonacci(NI - 2);
        }
    }

    Please keep translating, everyone.


  9. #9
    Join Date
    Apr 2011
    Posts
    1

    Smile

    Originally Posted by 232010078078 View Post
    Is anyone willing to take on English translation for me?
    OK, contact me at kute7f@yahoo.com!


  10. #10
    Join Date
    Apr 2011
    Posts
    2

    Default

    Your translation is good. I have a fairly long paper that must be translated within three days. I want to translate it to gain experience and improve my skills, but it is too hard. This is just a very small part of the paper; please take a look: 1. Introduction

    Data-hiding is a technique used to embed a sequence of bits in a host image with small visual deterioration and the means to extract it afterwards. Most data-hiding techniques modify and consequently distort the host signal in order to insert the additional information. This distortion is usually small but irreversible. Reversible data-hiding techniques insert information bits by modifying the host signal, but enable the exact (lossless) restoration of the original host signal after extracting the embedded information. Sometimes, expressions like distortion-free, invertible, lossless, or erasable watermarking are used as synonyms for reversible watermarking.

