The Semantic Web: Subject Index - Links, References and Notes
Note: This section contains more or less permanent links (at least I hope they are permanent; let me know if they are not).
There is an interesting organization in Europe, the Semantic Web Services Initiative (SWSI), funded by DARPA and the EU. Their stated objective is to:
... create infrastructure that combines Semantic Web and Web Services technologies to enable maximal automation and dynamism in all aspects of Web service provision and use, including (but not limited to) discovery, selection, composition, negotiation, invocation, monitoring and recovery;
They say something else about "Intelligent Web Services" that's very interesting, very telling in a way. It's worthy of an extended quote (reformatted a bit).
Any enterprise requiring a business interaction with another enterprise can automatically discover and select the appropriate optimal Web services relying on selection policies. Services can be invoked automatically and payment processes can be initiated. Any necessary mediation would be applied based on data and process ontologies and the automatic translation and semantic interoperation.
An example would be supply chain relationships where an enterprise manufacturing short-lived goods must frequently seek suppliers as well as buyers dynamically. Instead of employees constantly searching for suppliers and buyers, the Web service infrastructure does it automatically within the defined constraints. Other application areas for this technology are Enterprise-Application Integration (EAI), eWork, and Knowledge Management.
Note how every aspect of their approach to the Semantic Web is directed at process automation, that is, getting people out of the loop. This is quite a different objective from our own, which is to facilitate people sharing knowledge, in other words, getting people into the loop. There are at least two very distinct and to some extent mutually contradictory definitions of the term 'Semantic Web'.
On the other hand, the Open Source versions of the products being developed for highly automated business applications will probably be the cornerstone of the 'knowledge sharing' on the Semantic Web. So there's good in it after all.
The following is an extended quote of the definition provided by the W3's Semantic Web Project.
The Semantic Web is a web of data. There is lots of data we all use every day, and it's not part of the web. I can see my bank statements on the web, and my photographs, and I can see my appointments in a calendar. But can I see my photos in a calendar to see what I was doing when I took them? Can I see bank statement lines in a calendar?
Why not? Because we don't have a web of data. Because data is controlled by applications, and each application keeps it to itself.
The Semantic Web is about two things. It is about common formats for interchange of data, where on the original Web we only had interchange of documents. Also it is about language for recording how the data relates to real world objects. That allows a person, or a machine, to start off in one database, and then move through an unending set of databases which are connected not by wires but by being about the same thing.
The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. It is a collaborative effort led by W3C with participation from a large number of researchers and industrial partners. It is based on the Resource Description Framework (RDF).
From W3's Semantic Web FAQ - What is the Semantic Web?
How would you define the main goals of the Semantic Web?
The Semantic Web is an extension of the current Web better enabling computers and people to work in cooperation.
The Semantic Web will allow two things. Firstly, it will allow data to be surfaced in the form of data, so that a program doesn't have to strip the formatting and pictures and ads off a Web page and guess where the data on it is. Secondly, it will allow people to write (or generate) files which explain—to a machine—the relationship between different sets of data. For example, one will be able to make a "semantic link" between a database with a “zip-code” column and a form with a “zip” field that they actually mean the same – they are the same abstract concept. This will allow machines to follow links and hence automatically integrate data from many different sources.
Semantic Web technologies can be used in a variety of application areas; for example: in data integration, whereby data in various locations and various formats can be integrated in one, seamless application; in resource discovery and classification to provide better, domain specific search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical “document”; for describing intellectual property rights of Web pages (see, eg, the Creative Commons), and in many others.
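The zip-code example in the quote above is easy to make concrete. Below is a minimal sketch using Python's rdflib (a library mentioned further down this page in connection with Pychinko and FuXi) and made-up example.org vocabularies; the property names and data are purely illustrative, and rdflib itself does no OWL reasoning.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL

DB = Namespace("http://example.org/database#")   # assumed vocabulary of the database
FORM = Namespace("http://example.org/webform#")  # assumed vocabulary of the web form

g = Graph()
g.add((DB.person1, DB["zip-code"], Literal("19104")))   # data using "zip-code"
g.add((FORM.visitor7, FORM.zip, Literal("60614")))      # data using "zip"

# The "semantic link" itself: both properties denote the same abstract concept.
g.add((DB["zip-code"], OWL.equivalentProperty, FORM.zip))

# The simplest possible use of the link: treat the two properties
# interchangeably when reading the data back.
for s, p, o in g:
    if p in (DB["zip-code"], FORM.zip):
        print(s, "has postal code", o)

A full OWL reasoner would use the owl:equivalentProperty statement automatically; the loop here just shows what "they are the same abstract concept" buys you in practice.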
There is an interesting organization called Semantically-Interlinked Online Communities. They seem to be more aware of the social possibilities for the Semantic Web. Note that their ontology is basically a list of all the things on the web they recognize and the relationships between things. Can I understand it? Can I convert it to something else? Can information from different sources be combined in useful ways? This is how they describe their mission.
SIOC provides methods for interconnecting discussion methods such as blogs, forums and mailing lists to each other. It consists of the SIOC ontology, an open-standard machine readable format for expressing the information contained both explicitly and implicitly in internet discussion methods, of SIOC metadata producers for a number of popular blogging platforms and content management systems, and of storage and browsing / searching systems for leveraging this SIOC data.
Obviously, there is still a distance to go before the Semantic Web is a reality, but new foundations are being built upon the old, and that often makes for the most stable and powerful technology in the long run.
For a more European perspective, http://www.neuroinf.de/Miscellaneous/SWglossary
And RoW2006 - Reasoning on the Web, http://www.aifb.uni-karlsruhe.de/WBS/phi/RoW06/
http://knowledgeweb.semanticweb.org/
Knowledge Web (KW) is a 4 year Network of Excellence project funded by the European Commission 6th Framework Programme. Knowledge Web began on January 1st, 2004. Supporting the transition process of Ontology technology from Academia to Industry is the main and major goal of Knowledge Web.
The mission of KnowledgeWeb is to strengthen the European industry and service providers in one of the most important areas of current computer technology: Semantic Web enabled E-work and E-commerce.
http://knowledgeweb.semanticweb.org/benchmarking_interoperability/owl/
Benchmarking the interoperability of Ontology Development Tools using OWL as interchange language
As stated in the main section for the Semantic Web, the Wikipedia defines a software agent as:
... an abstraction, a logical model that describes software that acts for a user or other program in a relationship of agency[1]. Such "action on behalf of" implies the authority to decide when (and if) action is appropriate. The idea is that agents are not strictly invoked for a task, but activate themselves.
The definition continues with the different types of agents, including (slightly reformatted for clarity):
Intelligent agents (in particular exhibiting some aspect of Artificial Intelligence, such as learning and reasoning),
Multi-agent systems (distributed agents that do not have the capabilities to achieve an objective alone and thus must communicate),
Autonomous agents (capable of modifying the way in which they achieve their objectives),
Distributed agents (being executed on physically distinct machines),
Mobile agents (agents that can relocate their execution onto different processors).
Two types of agent, distributed and mobile agents, are classified according to their operating environments and, while important, are of marginal interest to a rule-based approach to the Semantic Web.
However, three of these agent types, that is, intelligent, multi-agent and autonomous agents, have the ability to learn, reason and communicate. In other words, their value is based on their ability to express and use knowledge, and they must have extensive rule-processing capabilities in order to make their knowledge work.
Is it an Agent, or just a Program?:
A Taxonomy for Autonomous Agents
http://www.msci.memphis.edu/~franklin/AgentProg.html
From the Haley Forum on the subject of ontology:
Question: What is an ontology and why is it useful?
Answer: An ontology is sort of a theory of what exists... in practice you typically see a list of kinds of things, starting out abstract (concept, entity, relation, number) and getting more concrete. An ontology gives people (or software agents) a shared set of definitions so they can communicate with clarity and consistency about the types of things mentioned in the ontology.
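To make that less abstract, here is a toy ontology in the same spirit, built with rdflib; the example.org namespace and the class names are invented for illustration, not taken from any published ontology.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/onto#")
g = Graph()

# Kinds of things, starting out abstract and getting more concrete.
for cls in (EX.Entity, EX.PhysicalObject, EX.Organism, EX.Person):
    g.add((cls, RDF.type, OWL.Class))

g.add((EX.PhysicalObject, RDFS.subClassOf, EX.Entity))
g.add((EX.Organism, RDFS.subClassOf, EX.PhysicalObject))
g.add((EX.Person, RDFS.subClassOf, EX.Organism))

# Anyone who loads this graph shares the same definitions of these terms.
print(g.serialize(format="turtle"))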
For a heavy-weight Artificial Intelligence model of learning in the Semantic Web, see Bootstrapping knowledge representations: from entailment meshes via semantic nets to learning webs by Francis Heylighen. Note that it was written in 1997, close to 10 years ago. Advanced conceptual models encountered in AI can sit for a decade or more before it becomes feasible to implement them.
This sort of model may not seem appropriate for a 'simple' interface to the Semantic Web, but, in fact, it may require powerful models of this sort to simplify the user interface enough for use by ordinary mortals. Modern software which appears to be simple on the surface is often the most complex.
The symbol-based, correspondence epistemology used in AI is contrasted with the constructivist, coherence epistemology promoted by cybernetics. The latter leads to bootstrapping knowledge representations, in which different parts of the cognitive system mutually support each other. Gordon Pask's entailment meshes and their implementation in the ThoughtSticker program are reviewed as a basic application of this methodology.
Entailment meshes are then extended to entailment nets: directed graph representations governed by the "bootstrapping axiom", determining which concepts are to be distinguished or merged. This allows a constant restructuring and elicitation of the conceptual network. Semantic networks and frame-like representations with inheritance can be expressed in this very general scheme by introducing a basic ontology of node and link types. Entailment nets are then generalized to associative nets characterized by weighted links. Learning algorithms are presented which can adapt the link strengths, based on the frequency with which links are selected by hypertext browsers.
It is argued that these different bootstrapping methods could be applied to make the World-Wide Web more intelligent, by allowing it to self-organize and support inferences through spreading activation.
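The two mechanisms in that abstract, strengthening links according to how often browsers follow them and spreading activation across the resulting weighted network, can be sketched in a few lines. This is only a toy illustration of the idea, not Heylighen's actual algorithms.

from collections import defaultdict

weights = defaultdict(float)            # (from_page, to_page) -> link strength

def record_click(src, dst, reward=1.0, decay=0.99):
    """Strengthen a followed link; let all other links decay slightly."""
    for edge in weights:
        weights[edge] *= decay
    weights[(src, dst)] += reward

def spread_activation(start, steps=2, damping=0.5):
    """Propagate activation from a start page along the weighted links."""
    activation = defaultdict(float, {start: 1.0})
    for _ in range(steps):
        incoming = defaultdict(float)
        for (src, dst), w in weights.items():
            incoming[dst] += damping * activation[src] * w
        for node, amount in incoming.items():
            activation[node] += amount
    return dict(activation)

record_click("home", "semantic-web")
record_click("semantic-web", "rdf-primer")
print(spread_activation("home"))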
The Dark Side of the Semantic Web
Themes and metaphors in the semantic web discussion.
Tim Berners-Lee's Semantic Web Roadmap, dated September 1998.
Wikipedia reference for software agents, labeled [1]: Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach (2nd Edition), Prentice Hall, 2002, ISBN 0-13-790395-2.
In the mid-1990s, the Department of Information and Software Engineering at George Mason University
developed a very interesting architecture for the Advanced Research Projects Agency.
Despite being over a decade old, it is still a very good foundation for
understanding the components required for large-scale integration of
information.
http://ise.gmu.edu/I3_Arch/index.html
Family 1. Coordination Services
These services provide ad hoc or automated support for programming I^3 configurations. This might include locating the I^3 tools and information sources potentially relevant to an information intensive task, creating a template which is used to construct an I^3 Configuration, and driving the execution of the task using that configuration. Coordination Services use other services to perform these functions; Coordination Services direct their activities. To a software engineer, Coordination Services can be viewed as supporting a specific paradigm for constructing and running process programs or component-based programs.
Primary Coordination Services are Dynamic Tool Selection and Invocation (Brokering), Dynamic Configuration Construction (Facilitation), Static Configuration Construction (Matchmaking), and Ad Hoc Configuration Construction Services.
Family 2. Management Services
Management Services are used by Coordination Services to locate useful information sources and tools, to explain their capabilities, and to create and interpret templates. Templates are data structures that specify I^3 Configurations.
A template might describe the services, information sources, and steps that might perform a complete I^3 task, or it might describe how a small subtask is to be performed. The latter would arise in cases where a Dynamic Configuration Construction Service accomplishes an I^3 task by iteratively working on subtasks.
The primary Management Services are Resource Discovery, Configuration Process Primitives, and Template Interpretation and Execution Services.
Family 3. Semantic Integration and Transformation (SIT) Services
These services support the semantic manipulations needed when integrating and transforming information to satisfy an I^3 task, as well as the capabilities needed to re-use program components. In the first case, the typical input to such services would be one or more specific (possibly wrapped) information sources, and the typical output would be an integrated and/or transformed view of this information. In the second case, typical input to such services would be one or more (possibly wrapped) software components, and the typical output would be a re-configured composite of these components.
The primary SIT services are Schema Integration, Information Integration, Process Integration Support, Physical Integration Support, and Component Programming Services.
Family 4. Functional Extension Services
These services augment the functionalities of other I^3 Services. They are used mostly by services that manipulate information sources, i.e., Wrapping, SIT, and Coordination Services. For the most part, Functional Extension Services enrich the semantic expressiveness of information sources.
The primary Functional Extension Services include Activeness, Inference, Multistate, Temporal, Object-Orientation, and Persistence Services.
Family 5. Wrapping Services
These services are used to make information sources comply with an internal or external standard. This standard may involve the interface to the information source or the behavior of the information source. Some wrappers simply transform the output or interface of an information source or software component. Other wrappers modify the meaning or behavior of information sources or software components; this may involve creating new internal interfaces, or even exposing internal interfaces to services using the wrapped information source/component.
The primary Wrapping Services are Communication Wrapping, Data Restructuring Wrapping, and Behavioral Transformation Wrapping Services.
From the OWL Guide,
The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available are based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web accessible resources. These descriptions must be in addition to the human-readable versions of that information.
The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes and relations between them that are inherent in Web documents and applications.
A pivotal white paper written by Tim Berners-Lee, one of the authors of the Scientific American article on the Semantic Web. It specifically addresses the subject of Object-Oriented programming and the Web, mentioning the need for a "format negotiation language" and "standard data language".
It is not insignificant that Tim Berners-Lee is now the Director of W3, the official source of the standard languages for the Semantic Web, that is, RDF and OWL.
Format negotiation allows data formats to introduced smoothly over time, and the URI format allows new protocols to be introduced over time. The web puts as few constraints as possible on anything, defining the minimum amount for interoperability. Applications can interact to the limit of the concepts which they share. Can we define a structure for global OOP which will allow all current, and curretly inconcievable future, OO systems to interoperate limited only by their common concepts?
... I propose the following general requirements. The ability to interogate a remote object as to the interfaces which it supports, with a format negotiation in which the response is sent.
The support for an interface (OO protocol) as a first class object on the Web.
A way of finding suitable formats for the submission of parameters for remote operations, or a standard data syntax.
... Note
The work to allow WWW to be used for OO is intimately linked with the ideas of using OO for WWW. If new W3C protocol designs are to be based on machine readable interface definitions (eg HTTP-NG), then there is a lot of common work here.
Bad spelling, great thinking. :-)
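For what it's worth, the "format negotiation" he asks for already exists in today's HTTP as content negotiation. Here is a minimal sketch using Python's standard library against a placeholder URL, just to show the mechanism; it is not anything specific to Berners-Lee's proposal.

import urllib.request

url = "http://example.org/resource"    # placeholder URL, not a real endpoint
req = urllib.request.Request(
    url,
    headers={"Accept": "text/turtle, application/rdf+xml;q=0.8, text/html;q=0.5"},
)
with urllib.request.urlopen(req) as resp:
    print("Server chose:", resp.headers.get("Content-Type"))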
From the Semantic Web Best Practices and Deployment (SWBPD) Working Group
The aim of this Semantic Web Best Practices and Deployment (SWBPD) Working Group is to provide hands-on support for developers of Semantic Web applications. With the publication of the revised RDF [as of February 2004] and the new OWL specification, we expect a large number of new application developers.
The SWBPD developed A Semantic Web Primer for Object-Oriented Software Developers.
The key to understanding ontology-driven architectures is to keep in mind that in ontology languages:
* Properties are independent from specific classes
* Instances can have multiple types and change their type as a result of classification
* Classes can be defined dynamically, at runtime
These key differences imply that it is not sufficient to simply map OWL/RDF Schema classes into OO classes, where attributes are fixed to classes etc. Instead, if an application wants to exploit the weak typing and flexibility of OWL/RDF Schema, it is necessary to map OWL/RDF Schema classes into runtime objects, so that classes in the ontology become instances of some object-oriented class (compare also [G 2003] or [KPBP 2004]). As illustrated in Figure 4, a typical object model to represent Semantic Web ontologies would contain classes to represent resources, classes, properties and individuals. Note that the terms RDFSClass and RDFProperty relate to the classes rdfs:Class and rdf:Property defined in RDF Schema, whereas the term RDFIndividual has no direct counterpart defined in RDF Schema.
Further extensions for the various types of OWL classes and properties are easy to imagine (see Protege OWL Diagram for a complete OWL object model).
Applications would load ontologies into such an object model and then manipulate and query the objects at runtime. Since OWL/RDF Schema classes are objects, it is possible to add and modify classes, for example to change the logical characteristics of an ontology at runtime. Since RDF properties are objects (and their values are not stored as object-oriented attributes), it is possible to assign and query values for any resource dynamically. Since individuals are objects, it is possible to dynamically change their type.
This approach reflects a dynamic style of development known in mainstream software technology as the Dynamic Object Model pattern [RTJ 2005]. In certain object-oriented systems, representing the object types themselves as objects means they can be changed at configuration time or at runtime, making it easy to adapt the system to new requirements.
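A minimal sketch of that pattern, using rdflib: ontology classes are ordinary runtime objects in a graph, so new classes can be minted and an individual's type changed while the program runs. The example.org names are made up for illustration; this is not the primer's own code.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/dyn#")
g = Graph()

g.add((EX.Vehicle, RDF.type, OWL.Class))
g.add((EX.car42, RDF.type, EX.Vehicle))             # an individual

# Define a brand-new class at runtime...
g.add((EX.ElectricVehicle, RDF.type, OWL.Class))
g.add((EX.ElectricVehicle, RDFS.subClassOf, EX.Vehicle))

# ...and reclassify the individual, something a fixed OO class model resists.
g.remove((EX.car42, RDF.type, EX.Vehicle))
g.add((EX.car42, RDF.type, EX.ElectricVehicle))

print(list(g.objects(EX.car42, RDF.type)))          # the individual's new type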
Ontoworld is a valuable Semantic Web resource. They have a 'Semantic' Wiki based on a version of the same technology supporting the Wikipedia, called Semantic MediaWiki.
The Semantic MediaWiki is an extension to the MediaWiki software (which also runs Wikipedia), which allows every user to make information more accessible to computer programs (including the ask query and the triple search available in SMW itself), which in turn makes it easier for humans to search or further use this information.
Semantic MediaWiki offers two means to make information about a page more explicit:
* Categorization of links (relations between pages)
* Typed attributes (of a page)
As an example, the page Berlin might say that it is the capital of Germany. Now, in the Semantic MediaWiki, users can type the link, thus making the relation (capital of) between Berlin and Germany explicit.
On the page Berlin, the syntax would be
... [[capital of::Germany]] ...
resulting in the semantic statement "Berlin" "capital of" "Germany".
On the page Berlin, it might say the population is 3,993,933. In the Semantic MediaWiki, users can make this information explicit, by writing
... the population is [[population:=3,993,933]] ...
resulting in the semantic statement "Berlin" "has population" "3993933".
If the same sort of information in the Wikipedia could be typed and categorized according to semantic extensions, it would represent a vast expansion of the capabilities of the wiki.
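Concretely, annotations like the ones above boil down to triples. The following sketch of that conversion uses rdflib and hypothetical example.org namespaces; it is not Semantic MediaWiki's actual export code or vocabulary, just an illustration of the idea.

import re
from rdflib import Graph, Namespace, Literal

WIKI = Namespace("http://example.org/wiki/")        # hypothetical page namespace
PROP = Namespace("http://example.org/property/")    # hypothetical property namespace

def annotations_to_triples(page, text, graph):
    # Typed links: [[property::TargetPage]]
    for prop, target in re.findall(r"\[\[([^:\]]+)::([^\]]+)\]\]", text):
        graph.add((WIKI[page], PROP[prop.replace(" ", "_")],
                   WIKI[target.replace(" ", "_")]))
    # Typed attributes: [[attribute:=value]]
    for attr, value in re.findall(r"\[\[([^:\]]+):=([^\]]+)\]\]", text):
        graph.add((WIKI[page], PROP[attr.replace(" ", "_")],
                   Literal(value.replace(",", ""))))

g = Graph()
annotations_to_triples(
    "Berlin",
    "... [[capital of::Germany]] ... the population is [[population:=3,993,933]] ...",
    g)
print(g.serialize(format="turtle"))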
Ontoworld has an excellent, but by no means exhaustive, overview of ontologies, tools and technologies available for the Semantic Web, that is, "a collection of relevant efforts around enabling the Semantic Web in its true web sense".
A very interesting application among RDF inference engines is RAP - RDF API for PHP, a Semantic Web toolkit for PHP developers.
RAP started as an open source project at the Freie Universität Berlin in 2002 and has been extended with internal and external code contributions since then. Its latest release includes:
- a statement-centric API for manipulating RDF graphs as a set of statements
- a resource-centric API for manipulating RDF graphs as a set of resources
- integrated RDF/XML, N3 and N-TRIPLE parsers
- integrated RDF/XML, N3 and N-TRIPLE serializers
- in-memory or database model storage
- support for the RDQL query language
- an inference engine supporting RDF-Schema reasoning and some OWL entailments
- an RDF server providing similar functionality as the Joseki RDF server
- a graphical user-interface for managing database-backed RDF models
- support for common vocabularies
RAP can be used under the terms of the GNU LESSER GENERAL PUBLIC LICENSE (LGPL) and can be downloaded from http://sourceforge.net/projects/rdfapi-php/.
This may be the leading contender among Open Source RDF inference engines on the server side, largely because it is implemented in PHP, a far more generic, simple and undemanding environment than Java server environments, which can consume the capacity of a large machine under moderate load. Java runtimes on clients may be mixed with PHP servers via AJAX and similar glue technologies.
The implementation details probably need to wait for the "Rule Engines" section.
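Still, the statement-centric versus resource-centric distinction in RAP's API is easy to illustrate. RAP itself is PHP, so what follows is only a rough Python analogue using rdflib, not RAP's actual interface; the example.org data is made up.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/people#")
g = Graph()
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))

# Statement-centric view: the graph is a bag of triples.
for s, p, o in g:
    print(s, p, o)

# Resource-centric view: everything said about one particular resource.
for p, o in g.predicate_objects(EX.alice):
    print("alice", p, o)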
Another interesting RDF inference engine is the closed, proprietary and non-free RDF Gateway by Intellidimension. They have very good demos of the server; in fact, the demos are good enough to start raising real-world issues about the challenges and complexities of maintaining an 'ontology server', and the perennial issues of server performance under load. They also have a free download of a client-side version of the product to experiment with. In any case, it seems to be very serviceable and sophisticated, and has excellent documentation - worth keeping an eye on.
Is it outright hubris or just too many refreshments at the panel reception?
http://iswc2006.semanticweb.org/program/webpanel.php
The Role of Semantic Web in Web 2.0: Partner or Follower?, coordinated by Mark Greaves, Vulcan.
Panel Chair:
* Mark Greaves, Vulcan Inc.
Currently, the web phenomenon that is driving the best developers and captivating the best entrepreneurs is Web 2.0. Web 2.0 encompasses some of today's most exciting web-based applications: mashups, blogs/wikis/feeds, interface remixes, and social networking/tagging systems. Although most Web 2.0 applications rely on an implicit, lightweight, shared semantics in order to deliver user value, by several metrics (number of startups funded, number of "hype" articles in the trade press, number of conferences), Web 2.0 technologies are significantly outdistancing semweb technologies in both implementation and mindshare. Hackers are staying up late building mashups with AJAX and REST and microformats, and only rarely including RDF and OWL. This panel will consider whether semantic web technology has a role in Web 2.0 applications, in at least the context of the following areas:
1. Web 2.0 and Semantics: What unique value can semantic web technologies supply to Web 2.0 application areas? How do semantic web technologies match up with the semantic demands of Web 2.0 applications?
2. Semantics and Web "Ecosystems": Web 2.0 applications often strive to build participatory ecosystems of content that is supplied and curated by their users. Can these users effectively create, maintain, map between, and use RDF/OWL content in a way that reinforces the ecosystem?
3. Semantic Web in Practice: Does semantic web technology enable the cost-effective creation of Web 2.0 applications that are simple, scalable, and compelling for a targeted user community? Can semantic web technology genuinely strengthen Web 2.0 applications, or will it just be a footnote to the Web 2.0 wave?
O'Reilly has a newer, more focused definition of the term "Web 2.0":
Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them. (This is what I've elsewhere called "harnessing collective intelligence.")
Mindswap is the first OWL-powered Semantic Web site to proclaim itself as such. It fits the "Wiki on Steroids" model of the Semantic Web (which is good).
With the advent of the DARPA Agent Markup Language (DAML) program and the Semantic Web Activity at the World Wide Web Consortium (W3C) (and its predecessors), the "Semantic Web" (a term coined by Tim Berners-Lee) has been gaining widespread attention. In November, 2001, the W3C created the Web Ontology Working Group, which has developed the Web Ontology Language OWL, which is used in powering this web site -- making it the first "Owl-compliant" web site to date.
Abstract from "Swoogle: A Search and Metadata Engine for the Semantic Web"
Swoogle is a crawler-based indexing and retrieval system for the Semantic Web. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is 'ontology rank', a measure of the importance of a Semantic Web document.
http://www.hakia.com/about.html
hakia is building the Web's new "meaning-based" search engine with the sole purpose of improving search relevancy and interactivity, pushing the current boundaries of Web search. The benefits to the end user are search efficiency, richness of information, and time savings.
The basic promise is to bring search results by meaning match - similar to the human brain's cognitive skills - rather than by the mere occurrence (or popularity) of search terms. hakia's new technology is a radical departure from the conventional indexing approach, because indexing has severe limitations to handle full-scale semantic search.
hakia's capabilities will appeal to all Web searchers - especially those engaged in research on knowledge intensive subjects, such as medicine, law, finance, science, and literature.
Some very interesting and relevant comments from Jeff Heflin (et al.) at Lehigh University's Semantic Web and Agent Technologies Lab.
An extended quote from A Model Theoretic Semantics for Distributed Ontologies that Accounts for Versioning [PDF, 224K]:
The Semantic Web (Berners-Lee, Hendler, and Lassila 2001)[1] has been proposed as the key to unlocking the Web’s potential. The basic idea is that information is given explicit meaning, so that machines can process it more intelligently. Instead of just creating standard terms for concepts as is done in XML, the Semantic Web also allows users to provide formal definitions for the standard terms they create. Machines can then use inference algorithms to reason about the terms and to perform translations between different sets of terms. It is envisioned that the Semantic Web will enable more intelligent search, electronic personal assistants, more efficient e-commerce, and coordination of heterogeneous embedded systems.
Unfortunately, the Semantic Web currently lacks a strong underlying theory that considers its distributed aspects. To date, the semantics for semantic web languages have looked little different from the traditional semantics of knowledge representation languages. Traditional knowledge bases assume a single consistent point-of-view, but the knowledge of the Semantic Web will be the product of millions of autonomous parties and may represent many different viewpoints. We argue that the Semantic Web is not just AI knowledge representation using an XML syntax, but actually changes the way we should think about knowledge representation. Semantic web knowledge bases must deal with an additional level of abstraction, that of the document or resource that contains assertions and formulas. Furthermore, the semantics of the knowledge base must explicitly account for the different types of links that can exist between documents.
Although languages such as RDF and OWL currently give a definitive account of the meaning of any single document, things become more ambiguous when you consider how documents should be combined. In this respect, semantic web systems are in a state analogous to the early days of semantic nets. A quote from Brachman [3] about links between concepts in early semantic nets seems just as appropriate for "links" between semantic web documents today: "... the meaning of the link was often relegated to 'what the code does with it' - neither an appropriate notion of semantics nor a useful guide for figuring out what the link, in fact, means."
Without a better understanding of inter-document links on the Semantic Web, we will have serious interoperability problems.
First of all, it is important to understand that RDF ontologies have a fairly strict definition, and none of the current medical 'ontologies' meets those requirements.
Nonetheless, fascinating things are going on at the National Library of Medicine, most notably the Unified Medical Language System (UMLS). The purpose of UMLS is to "facilitate the development of computer systems that behave as if they 'understand' the meaning of the language of biomedicine and health".
They have developed a set of categorizations and relationships called the Semantic Network. In describing the structure and content of the UMLS Semantic Network, it says:
The scope of the UMLS Semantic Network is broad, allowing for the semantic categorization of a wide range of terminology in multiple domains. Major groupings of semantic types include organisms, anatomical structures, biologic function, chemicals, events, physical objects, and concepts or ideas. The links between the semantic types provide the structure for the network and represent important relationships in the biomedical domain. The primary link between the semantic types is the 'isa' link. The 'isa' link establishes the hierarchy of types within the Network and is used for deciding on the most specific semantic type available for assignment to a Metathesaurus concept. There is also a set of non-hierarchical relationships, which are grouped into five major categories: 'physically related to,' 'spatially related to,' 'temporally related to,' 'functionally related to,' and 'conceptually related to.'
The information associated with each semantic type includes a unique identifier, a tree number indicating its position in an 'isa' hierarchy, a definition, and its immediate parent and children. The information associated with each relationship includes a unique identifier, a tree number, a definition, and the set of semantic types that can plausibly be linked by this relationship.
It is pretty clear that the 'semantic types' can be mapped to non-medical domains in an Extended Zachman Framework.
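As a rough illustration of the UMLS record structure described above (unique identifiers, tree numbers, definitions, parents and children for semantic types; linkable type pairs for relationships), here is a hypothetical in-memory rendering in Python. The identifiers and definitions are made up, and this is not the actual UMLS distribution format.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SemanticType:
    unique_id: str                       # identifier (made up here, e.g. "T001")
    tree_number: str                     # position in the 'isa' hierarchy, e.g. "A1.1"
    definition: str
    parent: Optional[str] = None         # unique_id of the immediate parent
    children: List[str] = field(default_factory=list)

@dataclass
class Relationship:
    unique_id: str
    tree_number: str
    definition: str
    linkable: List[Tuple[str, str]] = field(default_factory=list)  # (source, target) type ids

organism = SemanticType("T001", "A1", "A living individual.", children=["T002"])
anatomy = SemanticType("T002", "A1.1", "A physical part of an organism.", parent="T001")
part_of = Relationship("R001", "R2.1", "Is a component of.", linkable=[("T002", "T002")])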
Another fascinating ontology is SNOMED Clinical Terms. SNOMED CT is "a dynamic, scientifically validated clinical health care terminology and infrastructure that makes health care knowledge more usable and accessible. The SNOMED CT Core terminology provides a common language that enables a consistent way of capturing, sharing and aggregating health data across specialties and sites of care".
If anything, the SNOMED conceptual framework is even more powerful than UMLS, encompassing elaborate defaulting logic, 'no finding' assertions, categorical subsumption, episodic temporal patterns and on and on. They have even developed some generic task models for medical diagnosis. However, there are legitimate questions about the openness of the 'standard' - it may not be free to use, which is like having no standard at all.
In any case, the 'ontologies' mentioned are powerful tools for knowledge engineering and will certainly be a major area of interest for individuals embarking on the Semantic Web.
http://www.mindswap.org/~katz/pychinko/
Pychinko: Rete-based RDF friendly rule engine
Also known as a CWM clone
Bijan Parsia, Yarden Katz, and Kendall Clark
What is Pychinko?
Pychinko is a Python implementation of the classic Rete algorithm (see Forgy82 for the original report). Rete (and its since-improved variants) has been shown to be, in many cases, the most efficient way to apply rules to a set of facts - the basic functionality of an expert system. Pychinko employs an optimized implementation of the algorithm to handle facts, expressed as triples, and process them using a set of N3 rules.
We've tried to closely mimic the features available in CWM, as it is one of the most widely used rule engines in the RDF community. Several benchmarks have shown our Rete-based Pychinko to be up to 5x faster than the naive rule application used in CWM (see presentation below for preliminary results). A typical use case for Pychinko might be applying the RDFS inference rules, available in N3, to a document. Similar rules are available for XSD and a dialect of OWL.
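To make the contrast with "naive rule application" concrete, here is what a naive fixpoint loop looks like for just one RDFS rule (an instance of a subclass is an instance of its superclasses), written with rdflib over a made-up example.org graph. This is not Pychinko's API and not a Rete network; it is the brute-force baseline that a Rete engine improves on.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/zoo#")
g = Graph()
g.add((EX.Penguin, RDFS.subClassOf, EX.Bird))
g.add((EX.Bird, RDFS.subClassOf, EX.Animal))
g.add((EX.pingu, RDF.type, EX.Penguin))

# Naive rule application: keep firing the rule until nothing new is derived.
changed = True
while changed:
    changed = False
    type_facts = list(g.triples((None, RDF.type, None)))
    subclass_facts = list(g.triples((None, RDFS.subClassOf, None)))
    for x, _, cls in type_facts:
        for sub, _, sup in subclass_facts:
            if sub == cls and (x, RDF.type, sup) not in g:
                g.add((x, RDF.type, sup))
                changed = True

print(list(g.objects(EX.pingu, RDF.type)))   # Penguin, Bird, Animal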
Mention of FuXi, from the Pychinko page:
Projects using Pychinko
FuXi: "FuXi (pronounced foo-see) is a forward-chaining, rule-based system that
expands 4RDF to include reasoning capabilities via interpretation of explicit
implications writen in Notation 3 and persisted in the Model under a named sub
graph (scope). Fuxi uses Pychinko to match and fire the specified rules. It also
includes a Versa function that allows Versa query expressions to be executed
within a model (scoped or not) extended to include statements inferred by rules in
a particular scope
FuXi is an interesting application of Pychinko:
http://copia.ogbuji.net/blog/2005-05-29/FuXi
FuXi - Versa / N3 / Rete Expert System
Pychinko is a Python implementation of the classic Rete algorithm which provides the inferencing capabilities needed by an Expert System. Part of Pychinko works on top of cwm / afon out of the box. However, its Interpreter only relies on rdflib to formally represent the terms of an RDF statement.
FuXi only relies on Pychinko itself, the N3 deserializer for persisting N3 rules, and rdflib's Literal and UriRef to formally represent the corresponding terms of a Pychinko Fact.
http://infomesh.net/2001/cwm/
Closed World Machine
CWM - Closed World Machine
CWM is a popular Semantic Web program that can do the following tasks:-
Parse and pretty-print the following RDF formats: XML RDF, Notation3, and NTriples
Store triples in a queryable triples database
Perform inferences as a forward chaining FOPL inference engine
Perform builtin functions such as comparing strings, retrieving resources, all using an extensible builtins suite
...
... The Modules
You should get the following modules and put them into a single directory:-
converter-cgi.py - A CGI interface (non-essential)
cwm.py - CWM itself, as an interface to all of the modules
cwm_crypto.py - CWM Cryptographic builtins
cwm_math.py - CWM Mathematical builtins (cf. the math module in Python)
cwm_os.py - CWM OS builtins (cf. the os module in Python)
cwm_string.py - CWM String builtins (cf. the string module in Python)
llyn.py - This is the store & inference engine part, where most of the magic takes place
notation3.py - The Notation3 parser and serializer
RDFSink.py - An RDF Sink
sax2rdf.py - A SAX RDF Handler
thing.py - Interns the URIs and Strings for use elsewhere
http://www.w3.org/2000/10/swap/
Semantic Web Application Platform - SWAP
or, if you like, Semantic Web Area for Play... visiting RDF and all points west, working toward the SWELL language; MIT-LCS's advanced development prototyping of tools and languages for the Semantic Web.