
    Topic: [An article from Nature] From XML to RDF: how semantic web technologies will change the design of 'omic' standards, by Xiaoshu Wang
    Posted by admin, 2007/3/14 10:31

    Nature Biotechnology 23, 1099−1103 (2005)
    Published online 7 September 2005; doi:10.1038/nbt1139

    From XML to RDF: how semantic web technologies will change the design of 'omic' standards
    Xiaoshu Wang, Robert Gorlitsky & Jonas S Almeida

    Department of Biostatistics, Bioinformatics and Epidemiology, Medical University of South Carolina, 135 Cannon St. Suite 303, Charleston, South Carolina 29403-5720, USA.

    Correspondence should be addressed to Jonas S. Almeida (almeidaj@musc.edu).




    With the ongoing rapid increase in both the volume and the diversity of 'omic' data (genomics, transcriptomics, proteomics and others), the development and adoption of data standards is of paramount importance to realizing the promise of systems biology. A recent trend in data standard development has been to use the extensible markup language (XML) as the preferred mechanism to define data representations. But as illustrated here with a few examples from proteomics data, XML, being syntactic and document-centric, cannot achieve the level of interoperability required by highly dynamic and integrated bioinformatics applications. In the present article, we discuss why semantic web technologies, as recommended by the World Wide Web Consortium (W3C), expand current data standard technology for biological data representation and management.


    Developing a data standard addresses two major concerns. The first is the content—what should be standardized; the second is the methodology—how the standard should be formulated. Most discussions about data standardization in the life sciences have been directed almost exclusively to the former [1, 2]. But the choice of standardization technology in fact conditions not only how the data are accessed; more importantly, it determines whether cross-discipline content can be merged to allow systemic integration, which is the critical issue for omic-level studies of biological organisms [3, 4].

    Furthermore, a data standard is more than just a medium for uniform data representation. By laying out the overall structure of the relationships among the encoded data, a data standard effectively defines a schema for a particular area of domain knowledge. In this respect, a data standard resembles the basic 'form of intuition' which, in a Kantian interpretation, conditions human perception during knowledge generation [5]. In addition, a data standard, once accepted, becomes the lingua franca of the respective community. Indeed, linguists have long postulated that language is not a mere label but the very origin of thought [6]. As a recent study on the numerical cognition of a Brazilian tribe has forcefully demonstrated, human cognition itself is constrained by the formalism of language [7]. Finally, standardization in the information age has a unique characteristic in that it is often carried out prior to, or in parallel with, technology development [8, 9, 10]. Because history has demonstrated that a standard developed with incomplete knowledge can hamper innovation [11], additional care must be taken to ensure that a data standard can evolve and adapt to a changing paradigm.

    The purpose of this article is therefore to discuss how the above issues affect the choice of methodologies for establishing data standards. More specifically, the article discusses the need, and the options, to go beyond the currently preferred choice of XML as a standard technology [12] for representing biological data.

    The limitations of XML
    To help understand the problem in detail, a hypothetical two-dimensional gel electrophoresis (2DE) experiment was devised (Fig. 1a), along with XML fragments describing the location and shape of spot 2 in two markup languages—annotated gel markup language (AGML) [13] and human proteome markup language (HUP-ML) [14] (Fig. 1b,c). The difference between the two XML formats shows that compatibility cannot be achieved by XML alone, because the language can be used in more than one way to encode the same information. It may be argued that the compatibility issue would never have occurred if the two parties had agreed on a single standard. This is undoubtedly true. But the question is: how can that be achieved in XML? In any scientific discipline, data relationships are bound to change with the development of new experimental methods. When such a change occurs, the standard must adjust to reflect the newly established relationships. Unfortunately, it is very difficult to define such an adaptive standard in XML. For instance, the concept of a virtual gel that underlies AGML—a catalog gel generated from a set of aligned real gels—is not supported by HUP-ML. Extending HUP-ML to support the concept would require an additional data item in the original schema. The task appears at first to be a straightforward exercise, because simply adding an optional attribute, such as "virtualGel_id", to the original <spot> construct would seem to suffice.
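
    The ambiguity is easy to reproduce. The Python sketch below uses only the standard library; the element and attribute names are hypothetical stand-ins for the AGML- and HUP-ML-style encodings of Figure 1b,c (the real schemas are not reproduced in this thread), and the centre coordinates are invented for illustration while the radii echo the values cited later in the text. Code written against one perfectly valid encoding simply fails on the other, even though both carry the same information.

        import xml.etree.ElementTree as ET

        # Two well-formed XML encodings of the same spot (names illustrative only).
        agml_like = """<spot id="2">
          <coord_x>4.0</coord_x><coord_y>2.5</coord_y>
          <dia_x>2.2134</dia_x><dia_y>1.293</dia_y>
        </spot>"""
        hupml_like = '<spot spot_no="2" x="4.0" y="2.5" radius_x="1.1067" radius_y="0.6465"/>'

        def centre_agml_style(xml_text):
            # Hard-wired to the first encoding's document structure.
            spot = ET.fromstring(xml_text)
            return float(spot.find("coord_x").text), float(spot.find("coord_y").text)

        print(centre_agml_style(agml_like))          # works: (4.0, 2.5)
        try:
            print(centre_agml_style(hupml_like))     # same information, different structure
        except AttributeError:
            print("HUP-ML-style fragment: the expected <coord_*> elements are absent")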


    Figure 1. A hypothetical 2DE example.
      
    (a) An artificially created 2DE gel with two 'spots'. (b,c) The description of the location and shape of the second spot is shown in AGML (b) and HUP-ML (c). Note that the two XML formats differ significantly in syntax, and that neither schema explicates the assumed coordinate system or the axis-aligned elliptic shape of the spot in a, both of which are necessary to make the XML code meaningful.



    But such a simple request demands a nontrivial solution in reality. First, the schema of HUP-ML does not allow any extension of the vocabulary beyond the original specification. This rigid requirement is not a design failure of HUP-ML; rather, it reflects the nature of XML. By restricting what type of data can and cannot appear in which places, an XML-encoded message can be validated to ensure correct software operation. Of course, techniques such as wildcards or substitution groups can be used to equip a schema with flexible extensibility. Using these techniques, however, unrealistically requires the schema designer to anticipate all future developments of the experimental method. Even then, because no rule can be specified to restrict the manner of extension, separately developed applications are very likely to develop different 'dialect' extensions [15]. Worse, because XML-based applications depend on the correct document structure to operate properly, any structural change may break the applications that support the original format. Hence, a simple extension effectively creates two different standards, defeating the original purpose of using built-in flexibility to extend a common standard. An alternative solution is to group newly extended features into a new namespace. Such an approach avoids breaking the existing schema, but the newly extended feature is unlikely to be structurally cohesive with the existing ones. The <virtualGel_id> element, for instance, must be arbitrarily placed at a location that is not obviously related to <spot>. In this case, software design, instead of schema design, becomes an integration project, and the incompatibility remains.

    The difficulty of extending XML-based standards has prompted many standard designers to bloat their schemas in anticipation of future developments. For instance, the gel-centric standards—both AGML and HUP-ML—include elements to accommodate the possible inclusion of mass spectrometry (MS) data. Conversely, mzXML [15], a standard developed mainly for encoding MS data, includes a variable content holder for potential 2DE data. But because these standards, despite overlapping with each other, differ in design philosophy, convention, technique and even the required content, merging 2DE data with MS data is even harder to achieve than merging the two 2DE standards.

    Even if the above integration difficulties could be overcome so that all data standards were unified into a single markup language, the resulting schema would be of no practical use. All data are inherently related to each other, and to accommodate all possible relationships the grand schema would eventually reach a magnitude that is simply too complex to implement.

    Where do the problems with XML originate?
    The above problem originates from the limited expressiveness of the XML language. This claim may appear to contradict the often-proclaimed 'self-descriptive' nature of XML. But XML, designed as a language for message encoding, is self-descriptive only about the following structural relationships: containment, adjacency, co-occurrence, attribute and opaque reference. All of these relationships "are indeed useful for serialization, but are not optimal for modeling objects of a problem domain" [16]. For instance, the relationship between the <spot> and <coord_*> tags of AGML is no different from that between <spot> and <dia_*>. But a computer algorithm must nevertheless treat them differently to support meaningful applications. To calculate the distance between two <spot>s, an algorithm must use the values of <coord_*>, but to calculate the area of each <spot>, it must retrieve the values of <dia_*> instead. This simple example illustrates that meaningful data exchange involves two levels of communication. The first is the message level, at which data must be encoded and decoded in a standard format so that applications know how to convert electronic bits into the data objects a programming language can work with. The second is the algorithmic level, at which the relationships between data objects must be explicitly specified so that applications can process the data accordingly.
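
    A small sketch of this distinction, assuming hypothetical AGML-like tags of the <coord_*>/<dia_*> form and invented values: the parser handles the message level, but the knowledge that coordinates serve for distances and diameters for areas (and that the spot is an axis-aligned ellipse) lives only in the application code, not in the schema.

        import math
        import xml.etree.ElementTree as ET

        gel_xml = """<gel>
          <spot id="1"><coord_x>1.0</coord_x><coord_y>1.0</coord_y>
                       <dia_x>0.8</dia_x><dia_y>0.5</dia_y></spot>
          <spot id="2"><coord_x>4.0</coord_x><coord_y>2.5</coord_y>
                       <dia_x>2.2134</dia_x><dia_y>1.293</dia_y></spot>
        </gel>"""

        spots = ET.fromstring(gel_xml).findall("spot")

        def centre(spot):
            # Message level: XML only says these elements are nested inside <spot>.
            return float(spot.find("coord_x").text), float(spot.find("coord_y").text)

        def area(spot):
            # Algorithmic level: that dia_* are the axes of an axis-aligned ellipse
            # is stated nowhere in the schema; the algorithm has to assume it.
            dx = float(spot.find("dia_x").text)
            dy = float(spot.find("dia_y").text)
            return math.pi * (dx / 2) * (dy / 2)

        (x1, y1), (x2, y2) = centre(spots[0]), centre(spots[1])
        print("distance between spots:", math.hypot(x2 - x1, y2 - y1))
        print("area of spot 2:", area(spots[1]))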

    XML is a language designed to standardize communication at the message level. As shown in Figure 2a, the AGML schema describes a precise structural relationship between <spot> and its attributes. What is missing is the description of the semantic relationships between nested content holders (Fig. 2b) that are required to invoke the appropriate algorithms. Using XML alone at both levels requires mapping the domain knowledge onto the document structure. Considering that only a few types of relationships can be specified in XML, the task is difficult, if not impossible, to achieve.


    Figure 2. Data relationships for a spot on a 2DE gel and its XML representation.
      
    (a) The data model of the AGML schema for a 2DE spot. Only the elements relevant to the discussion in the text are shown. (b) The data semantics of a 2DE spot. The semantics that can be mapped to AGML structures are shown as solid lines, whereas those that cannot be mapped but are implicitly assumed in AGML are shown as dotted lines. All of these semantics can be made explicit by the RDF representation, as discussed in Box 1 and modeled in Figure 4.



    In essence, a data interoperation problem is a communication problem, and successful communication must use a language that is semantically transparent relative to what is communicated. As no single language yet exists, or perhaps ever will, to establish the 'universal truth' [17], any language is only capable of conveying a particular portion of human knowledge as machine-processable information [18]. The difficulties of using XML to exchange domain knowledge arise, therefore, not so much because the language itself is flawed, which it is not [12], but because it is semantically underdetermined for the topic to be communicated.

    Semantic web technologies
    What is needed to solve the above interoperability issue is a knowledge-representation technology that can explicitly describe data semantics. Such a technology—collectively named semantic web technologies—has recently been endorsed by the W3C as the technology to promote data automation and reuse on the web (http://www.w3.org/2001/sw/).

    The foundational semantic web technology is the Resource Description Framework (RDF). RDF, as its name suggests, is a system for describing resources. It has a very simple yet elegant data model that can be summed up in one sentence: everything is a resource that connects with other resources via properties. A resource, according to the RDF Primer [19], "is anything that is identifiable by a uniform resource identifier (URI) reference". A property is also a resource, but one used to describe the relationship between resources.

    The basic information unit in RDF is an RDF statement in the form of '(subject, property, object)'. Each RDF statement can be modeled as a graph comprising two nodes connected by a directed arc (Fig. 3). A set of such graphs can jointly form a directed labeled graph (DLG) that can in theory model most, if not all, domain knowledge. For instance, the RDF graph shown in Figure 4 can be used to describe the example "spot #2".
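
    As a rough illustration of the (subject, property, object) model, the sketch below builds the spot #2 statements with the Python rdflib library, using the cce: terms cited later in the text (Spot, Ellipse, shape, x-radius, y-radius). The '#'-terminated namespace form and the use of a blank node for the shape are assumptions made for the example, not details taken from the actual charlestoncore ontology.

        from rdflib import Graph, Namespace, URIRef, Literal, BNode
        from rdflib.namespace import RDF

        # Assumed namespace form for the example ontology described in the article.
        CCE = Namespace("http://www.charlestoncore.org/ontology/example#")

        g = Graph()
        g.bind("cce", CCE)

        spot2 = URIRef("http://www.charlestoncore.org/ont/example/spot2")
        shape = BNode()  # an unnamed (blank) node, as in Figure 4

        # Each add() is one RDF statement: (subject, property, object).
        g.add((spot2, RDF.type, CCE.Spot))
        g.add((spot2, CCE.shape, shape))
        g.add((shape, RDF.type, CCE.Ellipse))
        g.add((shape, CCE["x-radius"], Literal(1.1067)))
        g.add((shape, CCE["y-radius"], Literal(0.6465)))

        # Together the statements form a directed labeled graph.
        print(g.serialize(format="turtle"))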


    Figure 3. Graph model for an RDF statement.
      
    An RDF statement can be modeled as a DLG with the resources (subject and object) as nodes and the property as the edge connecting the 'subject' to the 'object'.




    Figure 4. An RDF model for a spot on a 2DE gel.
      
    The graph in solid lines illustrates an RDF model for a protein spot on a gel, and the graph in dotted lines shows how to extend the original model with an additional statement. The namespace "cce" refers to the example ontology defined at http://www.charlestoncore.org/ontology/example, whereas "exp" refers to the supplementary ontology defined at http://www.charlestoncore.org/ontology/supplement. "...spot2" and "...gel3" are shorthand for the URIs http://www.charlestoncore.org/ont/example/spot2 and http://www.charlestoncore.org/ont/example/gel3, respectively. Nodes without labels indicate blank nodes. To keep the graph as simple as possible, not all relationships are shown, such as the domains and ranges of the properties of the example ontology. See Box 1 for an illustration of two independent RDF documents using this model to provide information about the spot.



    As a graph, the RDF model is oblivious to both syntax and semantics. An RDF model can be serialized in XML syntax [20], in N3 [21], or even in a specialized graphical notation language such as DLG2 [22]. The semantics of an RDF model, on the other hand, are obtained by reference to the RDF Schema language (RDFS) [23] and the Web Ontology Language (OWL) [24]. RDFS and OWL are two further semantic web technologies. Both languages are layered on top of RDF to offer support for inference and axioms—two features that move semantic web technologies from data representation toward knowledge representation [25].
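
    A sketch of that layering, again with rdflib: the vocabulary description itself is nothing but more RDF triples, here using RDFS terms. The class and property declarations are assumptions about what the example ontology might contain (the real charlestoncore ontology is not reproduced in this thread), and the two serialize() calls show that the same graph can be written out in RDF/XML or N3 without changing its meaning.

        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF, RDFS

        CCE = Namespace("http://www.charlestoncore.org/ontology/example#")  # assumed form

        schema = Graph()
        schema.bind("cce", CCE)

        # RDFS statements are ordinary RDF triples layered on top of the data model.
        schema.add((CCE.Spot, RDF.type, RDFS.Class))
        schema.add((CCE.Ellipse, RDF.type, RDFS.Class))
        schema.add((CCE.shape, RDF.type, RDF.Property))
        schema.add((CCE.shape, RDFS.domain, CCE.Spot))
        schema.add((CCE.shape, RDFS.range, CCE.Ellipse))

        # The same graph, two syntaxes; the triples are identical.
        print(schema.serialize(format="xml"))  # RDF/XML
        print(schema.serialize(format="n3"))   # N3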

    Data and data standards in semantic web
    To give a more concrete sense of semantic web technology, the same information provided in the earlier XML example (Fig. 1) is described in RDF (document 1 in Box 1). Comparing the RDF with the XML reveals three important differences.

    The first is the use of data standards. A data standard in the semantic web is referred to as an ontology, a knowledge-representation term defined as "a specification of a conceptualization" [26] or, more specifically, as "an engineering artifact to describe a certain reality, plus a set of explicit assumptions regarding the intended meaning of the vocabulary words" [27]. An ontology in this context is a dictionary, formulated in a certain syntax, that embodies the concepts of a particular domain of knowledge. The RDF in document 1 uses one such user-defined ontology (http://www.charlestoncore.org/ontology/example). Unlike namespaces in XML, which ultimately are just unique character strings for grouping related concepts, the ontology URI in RDF must be retrievable. Dereferencing the ontology above, for example, leads to an RDF document in which the concepts and usage of "Spot", "Ellipse", "Point", "shape" and "center", among others, are defined.
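
    In principle that retrievability can be exercised directly: an RDF toolkit can fetch the ontology document from its URI and inspect the vocabulary it defines. The sketch below assumes rdflib and a reachable ontology URL (the 2005-era charlestoncore address may no longer resolve), and it checks for both rdfs:Class and owl:Class declarations since the article does not say which the ontology uses; it is illustrative only.

        from rdflib import Graph
        from rdflib.namespace import OWL, RDF, RDFS

        onto = Graph()
        # Fetch and parse the RDF document that the ontology URI points to
        # (this will fail if the address no longer resolves).
        onto.parse("http://www.charlestoncore.org/ontology/example")

        # List the classes the ontology declares, e.g. Spot, Ellipse, Point.
        classes = set(onto.subjects(RDF.type, RDFS.Class)) | set(onto.subjects(RDF.type, OWL.Class))
        for cls in sorted(classes):
            print(cls)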

    The second difference of RDF is that the description of semantic relationships is explicit. Instead of using a combination of document structure and tag names to infer the shape of spot #2, as in XML, RDF explicitly states that the resource http://www.charlestoncore.org/ont/example/spot2 is a "cce:Spot", whose "cce:shape" is a "cce:Ellipse" with a "cce:x-radius" of 1.1067 and a "cce:y-radius" of 0.6465.

    The third difference is that the unique identifier attribute used in XML is no longer needed, because every resource in RDF has a URI by definition. It is important to note that using a URI in RDF is fundamentally different from using a unique identifier in XML: the uniqueness of the former is ensured globally, whereas that of the latter is only guaranteed within a document. The document-centric view of XML makes it difficult to refer to an embedded entity from outside its XML document. For instance, how can "spot #2" be referred to outside of this article? Therefore, to make information cohesive in XML, all data have to be included within a single document. The situation in RDF is different, because the use of URIs makes the physical location of a statement irrelevant. For example, to supplement the virtual gel information about http://www.charlestoncore.org/ont/example/spot2, another RDF document (document 2 in Box 1) is sufficient.
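
    A sketch of that decoupling with rdflib, standing in for documents 1 and 2 of Box 1 (the exact triples of Box 1 are not reproduced here, so the statements below are assumed): because both documents speak about the same globally named URI, merging them is a plain accumulation of statements, regardless of where each document physically lives.

        from rdflib import Graph, Namespace, URIRef
        from rdflib.namespace import RDF

        CCE = Namespace("http://www.charlestoncore.org/ontology/example#")    # assumed forms
        EXP = Namespace("http://www.charlestoncore.org/ontology/supplement#")

        spot2 = URIRef("http://www.charlestoncore.org/ont/example/spot2")
        gel3 = URIRef("http://www.charlestoncore.org/ont/example/gel3")

        doc1 = Graph()                        # stands in for 'document 1'
        doc1.add((spot2, RDF.type, CCE.Spot))

        doc2 = Graph()                        # stands in for 'document 2', published elsewhere
        doc2.add((spot2, EXP.virtualGel, gel3))

        merged = Graph()
        merged += doc1                        # merging is just accumulating statements;
        merged += doc2                        # the URI, not the document, identifies the spot

        for _, p, o in merged.triples((spot2, None, None)):
            print(p, o)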

    Why can RDF be helpful to omic approaches to biology?
    Three distinct features of RDF make it very helpful to omic sciences. First, the data structure underlying RDF is a DLG. Because adding nodes and edges to a DLG does not change the structure of any existing subgraph, RDF does not suffer from the unpredictable, extension-induced changes in data structure that hamper the adaptability of XML-based standards. Adding new information with new vocabularies to an existing resource is as easy as drawing a new node and connecting it to the existing graph (Fig. 4). Second, RDF makes an open-world assumption in that it "allows anyone to make statements about any resource" [28]. Furthermore, RDF is monotonic: new statements neither change nor negate the validity of previous assertions, making it particularly suitable for an academic environment, in which consensus and disagreement about the same resources usefully coexist and need to be formally recorded. Finally, all RDF terms share a global naming scheme, the URI, making distributed data and ontologies possible.

    The combined effect of global naming, a universal data structure and the open-world assumption is that resources exist independently but can be readily linked with little, if any, precoordination. For instance, the RDF in document 2 (Box 1) not only provides additional information about spot #2, but also uses a vocabulary term (http://www.charlestoncore.org/ontology/supplement#virtualGel) that was not previously defined in http://www.charlestoncore.org/ontology/example. The decoupled nature of RDF makes it a natural choice for defining an omic standard. The essence of omic science resides in its 'holistic' description of the subject of interest, and RDF makes it possible to connect all omic-specific data as a whole without necessarily turning them into a 'whole'.

    Discussion
    Like any evolving new technology, RDF is not without issues. One particular problem is the vagueness of the definition of 'resource'. When a uniform resource locator (URL)—instead of a URI—is used to represent resources of multiple dimensionalities, an 'identity crisis' occurs [29]. The philosophical argument about what a URI represents is beyond the scope of this discussion [30]. In practice, the problem can be conveniently avoided by using the proposed life science identifier (LSID; http://www.omg.org/cgi-bin/doc?dtc/04-05-01). Because LSID is designed to couple a naming scheme with a data-retrieving framework, the design decision can be deferred to the implementation stage, when the owner of the resource can decide in what dimensionality the resource will be provided, if at all.

    Of course, a bristling array of alternative ontologies may emerge in the initial stage of ontological development for a particular scientific discipline. But as a field matures, ontology usage is expected to converge to the most efficient and comprehensive subset. The fact that RDF uses URIs is particularly helpful in this regard. By assigning each concept a URI that can be referenced globally, RDF is immune to the 'dialects' [15] that vex XML-based standards. In RDF, whether an ontology becomes a 'standard' is mostly decided by its usefulness to a community. Opting for technologies that allow standards to be elected in this way fits not only the natural progression of science but also that of human language [31].

    It should be emphasized that, having originated in knowledge representation, semantic web technologies ultimately aim to furnish the current web with an inference engine. The usefulness of an ontology is nonetheless independent of the availability of such an engine. First and foremost, an ontology provides a lexicon. In this regard, RDF, by operating at the semantic level, offers a uniform data representation medium that permits system interoperability through shared ontologies [32].

    Although the road to this vision is yet to be cleared, the life sciences community has already started moving in this direction. For instance, the Microarray Gene Expression Data Society (MGED) has started an ontology working group (http://mged.sourceforge.net/ontologies/index.php) in an attempt to expand the concepts of MIAME [33] from MAGE-OM and MAGE-ML [34] into RDF [35]. Projects have also been undertaken to express the terms of the Gene Ontology (GO) (http://www.geneontology.org/GO.format.shtml) and UniProt (http://www.isb-sib.ch/~ejain/rdf/) in RDF format. Last year, the W3C sponsored the first workshop on the semantic web for life sciences, where many of these topics and issues were discussed [36]. Supporting tools for RDF, even if still limited and unstable compared with their XML counterparts, are increasingly available (see http://www.w3.org/RDF/). What is now missing is a broader awareness of the fundamental XML conundrum and a clearer comprehension of RDF technology among life scientists, such that they can participate more effectively in advancing the representation of their own domain expertise—a void this article hopes to help fill.

    Note: Supplementary information is available on the Nature Biotechnology website.

    Published online: 7 September 2005.




    References
    1. Quackenbush, J. Data standards for 'omic' science. Nat. Biotechnol. 22, 613−614 (2004).
    2. Brazma, A. On the importance of standardisation in life sciences. Bioinformatics 17, 113−114 (2001).
    3. Zerhouni, E. Medicine. The NIH Roadmap. Science 302, 63−72 (2003).
    4. Check, E. NIH 'roadmap' charts course to tackle big research issues. Nature 425, 438 (2003).
    5. Kant, I. Critique of Pure Reason, 2nd revised edn. (Palgrave Macmillan, New York, 2003).
    6. Whorf, B.L. Language, mind and reality. Theosophist 63, 281−291 (1942).
    7. Gordon, P. Numerical cognition without words: evidence from Amazonia. Science 306, 496−499 (2004).
    8. Cargill, C. Information Technology Standardization: Theory, Process and Organizations (Digital Press, Bedford, Massachusetts, 1989).
    9. Krechmer, K. The fundamental nature of standards: technical perspective. IEEE Commun. Mag. 38, 70 (2000).
    10. Sherif, M. A framework for standardization in telecommunications and information technology. IEEE Commun. Mag. 39, 94−100 (2001).
    11. Farrell, J. & Saloner, G. Standardization, compatibility and innovation. Rand J. Econ. 16, 70−83 (1985).
    12. Barillot, E. & Achard, F. XML: a lingua franca for science? Trends Biotechnol. 18, 331−333 (2000).
    13. Stanislaus, R., Jiang, L.H., Swartz, M., Arthur, J. & Almeida, J.S. An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results. BMC Bioinformatics 5, 9 (2004).
    14. Kamijo, A. et al. HUP-ML: human proteome markup language for proteomics database. JMSSJ On-line 51, 542−549 (2003). http://db.wdc-jp.com/mssj/search/abst/200305/ms510542.html
    15. Pedrioli, P.G. et al. A common open representation of mass spectrometry data and its application to proteomics research. Nat. Biotechnol. 22, 1459−1466 (2004).
    16. Cover, R. XML and semantic transparency. The Cover Pages, published online 23 October 1998, revised 24 November 1998. http://www.oasis-open.org/cover/xmlAndSemantics.html
    17. Spender, J. Pluralist epistemology and the knowledge-based theory of the firm. Organ. 5, 233−256 (1998).
    18. Galliers, R.D. & Newell, S. Back to the future: from knowledge management to data management. in Proceedings of the 9th European Conference on Information Systems, Bled, Slovenia, June 27−29, 2001, 609−615 (Moderna Organizacija, Kranj, Slovenia, 2001).
    19. Manola, F. & Miller, E. RDF Primer. W3C recommendation published online 10 February 2004. http://www.w3.org/TR/rdf-primer/
    20. Beckett, D. RDF/XML Syntax Specification (Revised). W3C recommendation published online 10 February 2004. http://www.w3.org/TR/2004/REC-rdf-syntax-grammar-20040210/
    21. Berners-Lee, T. Primer: Getting into RDF & Semantic Web using N3. Published online 29 June 2005. http://www.w3.org/2000/10/swap/Primer.html
    22. Wang, X. & Almeida, J.S. DLG2 - A graphical presentation language for RDF and OWL (v 2.0). Published online 10 August 2005. http://www.charlestoncore.org/dlg2/
    23. Brickley, D. RDF Vocabulary Description Language 1.0: RDF Schema. W3C recommendation published online 10 February 2004. http://www.w3.org/TR/rdf-schema/
    24. McGuinness, D.L. & van Harmelen, F. OWL Web Ontology Language Overview. W3C recommendation published online 10 February 2004. http://www.w3.org/TR/owl-features/
    25. Davis, R., Shrobe, H. & Szolovits, P. What is a knowledge representation? AI Magazine 14, 17−33 (1993).
    26. Gruber, T. A translation approach to portable ontologies. Knowledge Acquisition 5, 199−220 (1993).
    27. Guarino, N. Formal ontology and information systems. in Formal Ontology in Information Systems (IOS Press, Amsterdam, Netherlands, 1998).
    28. Klyne, G. & Carroll, J.J. (eds.) Resource Description Framework (RDF): Concepts and Abstract Syntax. W3C recommendation published online 10 February 2004. http://www.w3.org/TR/rdf-concepts/
    29. Clark, K.G. Identity crisis. XML.com, published online 11 September 2002. http://www.xml.com/pub/a/2002/09/11/deviant.html
    30. Berners-Lee, T. What do HTTP URIs identify? Published online 27 July 2002. http://www.w3.org/DesignIssues/HTTP-URI
    31. Sole, R. Language: syntax for free? Nature 434, 289 (2005).
    32. Berners-Lee, T. & Hendler, J. Publishing on the semantic web. Nature 410, 1023−1024 (2001).
    33. Brazma, A. et al. Minimum information about a microarray experiment (MIAME)-toward standards for microarray data. Nat. Genet. 29, 365−371 (2001).
    34. Spellman, P.T. et al. Design and implementation of microarray gene expression markup language (MAGE-ML). Genome Biol. 3, research0046 (2002).
    35. Stoeckert, C.J. Jr., Quackenbush, J., Brazma, A. & Ball, C.A. Minimum information about a functional genomics experiment: the state of microarray standards and their extension to other technologies. Drug Discov. Today: TARGETS 3, 159−164 (2004).
    36. Summary Report: W3C Workshop on Semantic Web for Life Sciences. Published online 22 November 2004. http://www.w3.org/2004/10/swls-workshop-report.html




    Acknowledgments
    This work was supported by the US National Heart, Lung, and Blood Institute (NHLBI) Proteomics Initiative through contract N01-HV-28181 to the Medical University of South Carolina (Principal Investigator D. Knapp), including its bioinformatics core (core C, Principal Investigator J.S. Almeida) and mathematical modeling project (project 7, Principal Investigator E.O. Voit), as well as by its administrative center, separately funded by the same initiative to the same institution (Principal Investigator M.P. Schachte). The authors also acknowledge support by training grant 1-T15-LM07438-01, "Training of toolmakers for Biomedical Informatics", from the US National Library of Medicine of the National Institutes of Health (NIH/NLM).


    Competing interests statement:  The authors declare that they have no competing financial interests.

