[W3CHINA.ORG discussion forum — board: Theoretical Computer Science (『 理论计算机科学 』)]

Thread subject: Problems and Projections in CS for the Next 49 Years (zz)
    Posted by: Logician (moderator)

    [The following text is cross-posted from the Algorithm board of 小百合BBS (Lily BBS).]
    [Originally posted by starfish@lilybbs.]

    Problems and Projections in CS for the Next 49 Years

    JOHN McCARTHY
    Stanford University, Stanford, California

    [Author's address: Computer Science Department, Stanford University,
    Stanford, CA 94305. Email: jmc@cs.stanford.edu. URL:
    http://www-formal.stanford.edu/jmc/.]


    Projection 50 years ahead is difficult, so I have eased my problem by
    offering problems and projections for only 49 years. My projections
    concern areas in which I have worked---artificial intelligence,
    mathematical theory of computation, and computer systems. However, I
    am surely not up-to-date in any of these areas, so some of what I
    project for the future may have already happened. I also talk about
    the problems caused by the way computing and computer science are
    practiced. We shouldn't exaggerate. The computer has caused big
    changes in society, but these probably aren't as great as those caused
    by the automobile.

    The future is a continuation of the past, so we treat progress in
    areas of computers and computer science as a continuation of their
    histories.

    1. The Personal Computer.

    Let's begin by dating some facilities provided by computers. In each
    case the first date is when some people had the facility and the
    second date when it was widespread.

    1.1 Computer Cost of Writing and Email becomes Marginal.

    By the late 1970s for some people and by the 1990s for almost all,
    computer power for writing and for email did not have to be rationed.

    1.2 Online All the Time at Home or at Work.

    For a few (me) this started in 1968. By the 1990s, it was
    widespread. The obvious culmination is that via a computer attached to
    clothing or even built into the body, one will be online all the
    time. It seems to me that the main payoff has already been achieved
    with computers at home and in the office. The additional time a person
    spends doing something with a computer through having it all the time
    will be a small fraction of what he is doing already.

    1.3 Email to Worldwide Destinations. 1970 to 1990s.

    1.4 Point and Click. 1980s.

    1.5 Pocket Computer. 1990 to ?.

    They aren't good enough for me yet. Surely we'll soon have rollable
    pocket displays or head-mounted displays good enough so that one can
    wear them continuously. Having the computer facilities of one's desk
    computer or laptop always at hand will be nice, but it will be a minor
    improvement, because the reason one is away from one's computer is to
    engage in some noncomputer activity.

    1.6 Online Buying and Selling via the Web. 1990s.

    This has a lot further to go.

    1.7 Search Engines. 1980s to present.

    Google is the first reasonably adequate web search engine. It can't
    answer questions but can often find an adequate human-readable source
    of information.

    In several important respects, the use of computers is more difficult
    than it was in the 1970s and 1980s. The complexity of basic facilities
    like editors, operating systems, and application programs has grown
    now that limitations on RAM and disk have been enormously relaxed.

    Point and click has made it very difficult, even for programmers, to
    customize their environments. It has encouraged the authoritarian
    tendencies of people who design systems for other people to
    use. Another difficulty arises from the Microsoft and Apple basic
    software being proprietary and secret. Linux has made a minor
    improvement. Now the ordinary computer user needs the services of
    system administrators who do no programming but know how the
    facilities are connected together.

    Perhaps this situation can be relieved by programs that can understand
    the features of system programs and can connect them in order to
    achieve goals specified in interaction with the user. Doing this right
    involves some AI but much less than human-level AI.

    Here are some areas with problems to solve and opportunities for
    progress.

    2. Programming Languages.

    In some important respects, recently developed languages, for example,
    Java, are a step backward from Lisp. For example, while Java does
    include automatic garbage collection (which its ancestor C++ did not),
    it still doesn't include functions as first-class objects (like Lisp
    lambda-expressions); these facilitate the creation of programs which
    themselves inspect, synthesize, or otherwise manipulate programs [and
    this has many important applications]. A Lisp program can look at its
    own internal structure and modify itself, or create a new Lisp program
    and interpret it right away. A Java program could modify itself at run
    time only by reference to the byte code, not the source code. Lisp
    also benefits from its internal form being list structure with the
    lead word saying what the expression is; for example, an assignment
    statement is (setq ...). To recognize an assignment statement, a Java
    program must search for =. Alternatively, a Java program could use a
    Java parser to parse itself into a suitable data structure.
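
    To make this concrete, here is a minimal Common Lisp sketch (my
    illustration, not from the article): code is list structure, so
    recognizing and constructing code is ordinary list manipulation, and
    a freshly built expression can be handed to the interpreter at once.

        ;; Code is list structure: an assignment is a list headed by SETQ.
        (defun assignment-p (form)
          (and (consp form) (eq (car form) 'setq)))

        ;; Functions are first-class objects: build one at run time...
        (defvar *square* (lambda (x) (* x x)))
        ;; (funcall *square* 7) => 49

        ;; ...and construct a new program as data, then interpret it.
        (defvar *counter* 0)
        (let ((new-code (list 'setq '*counter* 42)))  ; program built as a list
          (when (assignment-p new-code)
            (eval new-code)))                         ; run it immediately
        ;; *counter* is now 42.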

    In the next 49 years, programs that extend themselves at run time will
    become important.

    I see opportunities for improvement in regarding input-output as
    consisting of speech acts. Thus, computers can make requests or
    promises to people or other computer systems. They can also ask
    questions and give truthful and responsive answers to the questions.

    The proper performance of speech acts gives new specifications for
    programs. For example, we can ask whether a program keeps its promises
    or at least intends to keep them. The speech acts can be represented
    in machines as strings, as XML, or (better) as Lisp S-expressions, but
    they will have an abstract syntax and semantics at a higher level, and
    promise(proposition, person) can be a program statement meaning to
    promise to person that proposition will be true. See
    http://www-formal.stanford.edu/jmc/elephant.html for some ideas that
    are still not at the level of concrete proposals.
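
    As an illustration only (the names below are mine, not taken from
    Elephant 2000), speech acts might be represented as structures with
    an abstract syntax of act, proposition, and addressee; "keeping a
    promise" then becomes a checkable property of a program's behavior.

        ;; A hypothetical representation of speech acts.
        (defstruct speech-act
          kind          ; e.g. PROMISE, REQUEST, QUESTION
          proposition   ; what is promised, requested, or asked
          addressee)    ; the person or program spoken to

        (defun promise (proposition person)
          "Perform a promise: record it so it can later be audited."
          (make-speech-act :kind 'promise
                           :proposition proposition
                           :addressee person))

        ;; Part of a specification: a promise is kept if its proposition
        ;; eventually holds, as judged by some supplied truth test.
        (defun promise-kept-p (act truth-test)
          (and (eq (speech-act-kind act) 'promise)
               (funcall truth-test (speech-act-proposition act))))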

    3. Mathematical Theory of Computation.

    This is the study of the mathematical relations among computer
    programs, computable functions, and data structures.

    3.1 Computability and Computational Complexity.

    This is the one area in which computer science has developed deep
    mathematical questions, for example, P = NP. I trust others will
    characterize what may be expected from it in the next 49 years.

    3.2 Channel Capacity?

    Here's a problem whose solution would be as impressive as the
    discovery of NP-completeness: Is there a computational analog of
    Shannon's channel capacity theorem for communication? Shannon's
    theorem says that it is possible to transmit information over a
    communication channel at a rate arbitrarily close to its channel
    capacity with an arbitrarily low probability of error. A corresponding
    theorem for computation would say approximately that a computer of
    speed V would be able to do a computation of size X in time X/V, given
    enough memory.
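
    Stated schematically in LaTeX (the second formula is only the
    conjecture suggested above, not an established theorem):

        % Shannon: any rate R below capacity C is achievable with
        % arbitrarily small error probability.
        R < C \implies \forall \varepsilon > 0\ \exists\ \text{a code with } P_{\mathrm{error}} < \varepsilon

        % Conjectured computational analog: a machine of speed V, given
        % enough memory, does a computation of size X in time close to X/V.
        T(X) \le (1 + \varepsilon)\, \frac{X}{V} \quad \text{for arbitrarily small } \varepsilon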

    3.3 Proving Correctness of Computer Programs.

    In principle, no one should pay money for a computer program until its
    specifications are expressed formally, the program is proved to meet
    them, and the proof is mechanically checked. Considerable progress
    was made in the 1960s and 1970s, the outstanding achievement being the
    Boyer-Moore interactive theorem prover. While progress continues, it
    has been slowed by demands for short-term payoffs.
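
    To give the flavor, here is an ACL2-style sketch in the Boyer-Moore
    tradition, where both the program and its specification are Lisp
    formulas (the lemma name is mine, and a real session would have the
    prover discover the induction itself):

        ;; A pure Lisp program: append two lists.
        (defun app (xs ys)
          (if (endp xs)
              ys
              (cons (car xs) (app (cdr xs) ys))))

        ;; Its specification, stated as a theorem to be checked
        ;; mechanically by induction: length is additive under APP.
        (defthm len-of-app
          (equal (len (app xs ys))
                 (+ (len xs) (len ys))))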

    Today there is already great emphasis on computer programs that
    interact with people or other programs. Therefore, specification and
    verification of such programs will become increasingly important. As
    early as the 1970s, there was work on specifying text editors and
    proving that they met their specifications.

    The inputs and outputs of interactive programs are often well regarded
    as speech acts as discussed in Austin [1962], Searle [1969], and
    McCarthy [1996]. Speech acts include promises, requests, acceptances
    and denials of requests, questions, answers to questions, and
    pronouncements. Each of these, as discussed in McCarthy [1996], has
    conditions for its correct execution, e.g. promises should be kept and
    answers to questions should be true and responsive. These aspects of
    programs will become increasingly important.

    4. Artificial Intelligence.

    The long-term goal of artificial intelligence research should be
    human-level AI, that is, computer programs with at least the
    intellectual capabilities of humans. There are two main approaches to
    seeking this goal.

    The biological approach builds agents that imitate features of the
    physiology or psychology of humans. Most cognitive science agents
    imitate humans at the psychological level; connectionist systems and
    their neural net relatives imitate at the physiological level.

    For progress, the biological AI researchers need to figure out how to
    represent facts independent of their purpose, to make systems capable
    of sequential behavior, and to figure out what information to build
    into their systems corresponding to the rich information that human
    babies are born possessing.

    The engineering approaches to AI regard the world as presenting
    certain kinds of problems to an agent trying to survive and achieve
    goals. It studies directly how to achieve goals. The logical approach
    is a variety of the engineering approach. A logical agent represents
    what it knows in logical formulas and infers that certain actions or
    strategies are appropriate to achieve its goals.

    The logical agents of the next 49 years need at least (1) continued
    existence over time, (2) improved ability to reason about action and
    change, (3) more elaboration tolerant formalisms, (4) the ability to
    represent and reason about approximately defined entities, (5) enough
    selfawareness and introspection to learn from the successes and
    failures of their previous reasoning, (6) domain-dependent control of
    theorem provers and problem solvers, and (7) identifying the most
    basic commonsense knowledge and getting it into the computer.  This
    must include knowledge that people use without being able to formulate
    it verbally.

    The classical AI problems---the frame problem, the qualification
    problem, and the ramification problem [Shanahan 1997]---have been
    solved for particular formalisms for representing information about
    action and change. However, these currently known representation
    formalisms are unsatisfactory in various respects. Improved ways of
    representing action and change may present new versions of the frame,
    qualification, and ramification problems.

    It's a race to human-level AI, and I think the logical approach is
    ahead. Why? The logical approach to AI has faced and partly solved
    problems that all approaches will face sooner or later. Mainly they
    concern identifying and representing as data and programs information
    about how the world works. Humans represent this information
    internally, but only some of it is verbally accessible.  The logical
    approach also has the advantage that when we achieve human-level AI we
    will understand how intelligence works. Some of the evolutionary
    approaches might achieve an intelligent machine without anyone
    understanding how it works.

    What are the problems faced by logical AI? Here are a few.

    4.1 Facts about Action and Change.

    This has been the major concentration of work in logical AI. Present
    formalisms are pretty good for representing facts about discrete
    sequences of actions and their consequences. Programs exist for the
    automatic translation of planning problems so described into
    propositional theories, and propositional problem solvers are often
    adequate to solve the problems. The situation is not so good for
    continuous change, concurrent events, and problems involving the
    actions of several agents. We can hope for progress in the next 50
    years.
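
    For the discrete case, a toy STRIPS-style sketch (my example, not a
    formalism from the article) shows the shape of such representations:
    a state is a set of fluents, and an action adds and deletes fluents,
    with everything else persisting unchanged.

        ;; A state is a list of fluents; an action has add and delete lists.
        (defun holds-p (fluent state)
          (member fluent state :test #'equal))

        (defun do-action (action state)
          "Return the successor state after performing ACTION in STATE."
          (union (getf action :add)
                 (set-difference state (getf action :del) :test #'equal)
                 :test #'equal))

        ;; Example: move block A from the table onto block B.
        (defparameter *move-a-onto-b*
          '(:add ((on a b)) :del ((on a table))))

        ;; (do-action *move-a-onto-b* '((on a table) (clear b)))
        ;;   => ((on a b) (clear b))   ; fluent order may vary
        ;; (holds-p '(clear b) (do-action *move-a-onto-b*
        ;;                                '((on a table) (clear b))))
        ;;   => true (a non-NIL tail)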

    4.2 Elaboration Tolerance Including the Frame Problem.

    Existing AI systems, including both logical and biological, are
    extremely specialized in what information they can take into
    account. Taking into account new information, even information not
    contradicting previous knowledge, often requires rebuilding the
    system from scratch. Logical AI has studied elaboration tolerance,
    special cases of which are presented by the frame problem and the
    qualification problem. The frame problem involves the implicit
    specification of what doesn't change when an event occurs, and the
    qualification problem involves elaborating the sufficient conditions
    for an event to have its normal effect. [Shanahan 1997] treats
    elaboration tolerance, and there is also my
    http://www-formal.stanford.edu/jmc/elaboration.html.

    4.3 Nonmonotonic Reasoning.

    Humans and machines usually reach conclusions on the basis of
    incomplete knowledge. These conclusions may change when new knowledge
    becomes available. Probability theory is applicable to part of the
    problem, but more general nonmonotonic logics also seem to be
    required.
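
    The classic birds-fly example, in a deliberately crude Lisp sketch
    (a stand-in for real nonmonotonic formalisms such as circumscription):
    a conclusion drawn by default is retracted when new knowledge arrives.

        ;; Default reasoning: birds fly unless known to be abnormal.
        (defun flies-p (bird kb)
          (and (member (list 'bird bird) kb :test #'equal)
               (not (member (list 'abnormal bird) kb :test #'equal))))

        ;; (flies-p 'tweety '((bird tweety)))                    => T
        ;; Adding knowledge withdraws the conclusion (nonmonotonicity):
        ;; (flies-p 'tweety '((bird tweety) (abnormal tweety)))  => NIL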

    4.4 The Three Dimensional World: Approximate Knowledge.

    The knowledge available to a person or robot of its three-dimensional
    surroundings and of the objects it needs to manipulate is almost
    always very approximate, though sometimes precise geometric models
    like rectangular parallelepipeds approximate real objects well
    enough. Logical AI needs a general theory of approximate objects.

    4.5 The Relation between Appearance and Reality.

    We and any robots we may build live in a world of three-dimensional
    objects built up from substances that are, in turn, made of
    molecules. Our information about this world comes directly from
    senses like vision and hearing that carry only partial information
    about the objects. We, even as babies, are built to infer information
    about three-dimensional objects from observations and from general
    information about what kinds of objects there are.

    Machine learning and most AI have treated classifying appearances but
    don't go beyond appearance to the underlying reality. I have a puzzle
    about this (http://www-formal.stanford.edu/jmc/appearance.html). I
    think the relation of appearance and reality will be an important
    topic of AI research in the next 49 years.

    Logical AI has faced the above phenomena. Other approaches haven't
    faced them explicitly. Maybe it will turn out that they don't have to,
    but I have seen no arguments to that effect.

    My views on many of the specific problems are in articles published in
    various journals; almost all are to be found on my web page
    http://www-formal.stanford.edu/jmc/. There is no complete summary,
    but http://www-formal.stanford.edu/jmc/whatisai.html and
    http://www-formal.stanford.edu/jmc/logicalai.html express the
    logical AI point of view. [Shanahan 1997] covers much of the logical
    AI approach.

    No approach is close to human-level AI or even within development
    range. No one can convincingly say that given a billion dollars he
    could reach human level. A critical level of AI will be reached, as
    Douglas Lenat pointed out, when an AI system has enough basic
    commonsense information to be able to get more information by reading
    books.

    Human-level intelligence is a difficult scientific problem and
    probably needs some new ideas. These are more likely to be invented by
    a person of genius than as part of a Government or industry
    project. Of course, present ideas and techniques are sufficient for
    many useful applications.

    Still, we can ask for a subjective probability that human-level AI
    will be reached in the next 49 years. In the past, I have said that
    human-level AI will take between 5 and 500 years. I'll guess 0.5
    probability in the next 49 years, but a 0.25 probability that 49 years
    from now, the problems will be just as confusing as they are today.

    4.6 What if We Get Human-Level AI in the Next 49 Years?

    The current speculation about the consequences of getting human-level
    AI is mostly beside the point, because it concentrates on the possible
    personalities of the intelligent agents. Present ideas about what
    human-level AI will be like seem to come from science fiction.
    Consequently, discussions of policies concerning AI are presently
    beside the point.

    Because of the speed of computers, qualitatively human-level AI means
    quantitatively superhuman AI. Its immediate applications will be to
    get advice relevant to decisions about what to do. Even here the
    speculation, especially in science fiction, is beside the point. In
    the typical story, the human asks what to do, the machine answers, and
    taking the machine's advice has unexpected and unpleasant
    consequences. The proper use of an AI advisor is to ask it the
    consequences of many possible courses of action and to have it give
    the reasoning leading to its opinions.

    The main danger is of people using AI to take unfair advantage of
    other people.  However, we won't know enough to regulate it until we
    see what it actually looks like. I would hate to see the American
    presidential candidates in 2004 asked for their positions on AI.

    5. Quantum Computers.

    I believe they will be made to work. There are many physical phenomena
    being tried and the discovery of quantum error correction makes
    getting a general quantum computer a finite task. On the other hand,
    there is still only Shor's factoring algorithm as an example of a
    spectacularly useful quantum algorithm. The next 49 years will surely
    tell us what quantum computers are good for. I'll conjecture that
    factoring will turn out to be a paradigmatic example.

    6. Economics of Information.

    Many cost-effective improvements in the way information is handled are
    not done, because society is not organized to pay for them
    properly. We can hope for improvements in the economic organization of
    information in the next 49 years. Here are some examples.

    (1) Since about 1970, it has been economically feasible to put the
    world's literature, approximately the U.S. Library of Congress, on
    line and make it available worldwide. It is now rather cheap, but John
    Ockerbloom's library catalog www.digital.library.upenn.edu lists only
    about 20,000 freely available books. The situation was made worse by
    the extension of copyright from 75 to 95 years. The economic interests
    of publishers have dominated those of authors and readers. There is
    more money available than ever before to pay authors for their talent
    and work. This will get even better if the publishers can be squeezed
    as the new technologies warrant. Technologically, there is still a
    little ways to go before everyone can sit in his bathtub with a
    waterproof flat screen and browse the world's literature.

    (2) The world would benefit from a lot more manpower going into free
    software.  The volunteer efforts associated with the Free Software
    Foundation could use ten times the manpower. We need a way of paying
    for it out of public funds and for encouraging new volunteer
    efforts. Alternatively, the same result might be obtained by some
    improvement in the economics of commercial software.

    (3) The previous sections of this article have described goals for
    computing and computer science for the next 49 years. How fast these
    goals are achieved depends significantly on the attitude of the
    researchers and the agencies that support research. A large fraction
    of computer science research in industry and academia over the last 20
    years has been wasted, because the projects have had too short a time
    scale. For example, major researchers in program verification have
    spent almost all their time on verifying successive parts of
    microprocessors and have had little support for research in the
    general theory of verification.

    One result has been many academic projects and Silicon Valley
    companies pursuing essentially identical minor ideas. This is likely
    to continue during the next 49 years. Big advances will not come from
    a succession of 25 two-year projects. The rush to patent has resulted
    in patenting trivial improvements. Among the sciences, computer
    science has been particularly badly off. One reads all the time about
    physics and astronomy projects with ten- and twenty-year completion
    times.

    What long-term projects are worthwhile in computer science? For one,
    there need to be more long-term projects to develop automatic
    interactive theorem provers, better computer algebra systems, and a
    Lisp better than Scheme. It is important to choose experimental
    domains designed to be informative, rather than to emphasize
    short-term goals. The geneticists have worked with Drosophila since
    1910, and yet the fruit flies of today are no better than the fruit
    flies of 1910. Couldn't the geneticists at least have bred them for
    speed so we could enjoy fruit fly races? The "practical" types sneer
    at "toy problems," but these are the Drosophilas.

    To summarize, here are some problems I see.

    (1) Human-level AI and how to get there;

    (2) Getting AI to where programs can learn from books;

    (3) Specifying programs that interact with people and other programs;

    (4) Making formal proofs that programs meet specifications part of
        contracts;

    (5) Giving users full control of their computing environments,
        i.e., devising ways for users to reprogram their environments
        without their having to understand more than necessary;

    (6) Giving programming languages primitives for the abstract syntax of
        the language itself;

    (7) Proving a computational analog of the Shannon channel capacity
        theorem.

    REFERENCES

    AUSTIN, J. L. 1962. How to Do Things with Words. Oxford University
    Press, Oxford.

    MCCARTHY, J. 1996. Elephant 2000. Stanford Formal Reasoning Group.
    Available only at http://www-formal.stanford.edu/jmc/elephant.html.

    SEARLE, J. R. 1969. Speech Acts. Cambridge University Press,
    Cambridge, England.

    SHANAHAN, M. 1997. Solving the Frame Problem: A Mathematical
    Investigation of the Common Sense Law of Inertia. MIT Press,
    Cambridge, Mass.


    ----------------------------------------------
    Three passions, simple but overwhelmingly strong, 
    have governed my life: the longing for love, the
    search for knowledge, and unbearable pity for the
    suffering of mankind.
                                - Bertrand Russell

    Posted: 2005/3/15 14:24
     
    Reply by: eyounx (moderator)
    McCarthy, a real scientist, is called the 'father of AI'.

    ----------------------------------------------
    member of LAMDA, CS, NJU
    http://lamda.nju.edu.cn/
    http://lamda.nju.edu.cn/yuy

    Posted: 2005/3/15 19:40
     