"Nothing is beyond human ability, if one has genuine understanding."

Web Intelligence is the area of study and research that applies artificial intelligence and information technology to the Web in order to create the next generation of products, services, and frameworks based on the Internet. The 21st century is the age of the Internet and the World Wide Web. The Web revolutionizes the way we gather, process, and use information. At the same time, it redefines the meanings and processes of business, commerce, marketing, finance, publishing, education, research, and development, as well as other aspects of our daily life. The revolution is just beginning. Although individual Web-based information systems are constantly being deployed, advanced issues and techniques for developing, and for benefiting from, Web intelligence remain to be systematically studied. This article defines a new research field, Web Intelligence (WI), by giving a complete picture of WI-related topics for the systematic study of advanced Web technology and the development of Web-based intelligent information systems. Roughly speaking, WI exploits AI and advanced information technology on the Web and Internet. It is a key and urgent research field of IT for business intelligence. A University Reporting Strategy, approved by the Decision Support and UI-Integrate Steering Teams, outlined additional implications of a Decision Support function.
It confirmed that the preferred source for analytic reporting, and also a source for some operational reporting, would be the data warehouse. The data warehouse would be a major resource for reporting by colleges and departments in particular.

Artificial Intelligence

The original goal of the AI field was the construction of "thinking machines" – that is, computer systems with human-like general intelligence. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called "narrow AI" – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of "human-level intelligence" and, more broadly, artificial general intelligence (AGI). The phrase "artificial intelligence", which was coined by John McCarthy three decades ago, evades a concise and formal definition to date. One representative definition pivots on comparing the intelligence of computing machines with that of human beings. Another definition is concerned with the performance of machines which "historically have been judged to lie within the domain of intelligence". None of these definitions, or others like them, has been universally accepted, perhaps because of their reference to the word "intelligence", which at present is an abstract and immeasurable quantity. A better definition of artificial intelligence therefore calls for a formalization of the term "intelligence". Psychologists and cognitive theorists are of the opinion that intelligence helps in identifying the right piece of knowledge at the appropriate instant of decision making. The phrase "artificial intelligence" can thus be defined as the simulation of human intelligence on a machine, so as to make the machine efficient in identifying and using the right piece of knowledge at a given step of solving a problem. A system capable of planning and executing the right task at the right time is generally called rational. Thus, AI may alternatively be stated as a subject dealing with computational models that can think and act rationally [1-4]. A common question then naturally arises: does rational thinking and acting include all possible characteristics of an intelligent system? If so, how does it represent behavioral intelligence such as machine learning, perception and planning? A little thought, however, reveals that a system that can reason well must be a successful planner, as planning in many circumstances is part of a reasoning process. Further, a system can act rationally only after acquiring adequate knowledge from the real world. So perception, which stands for the building up of knowledge from real-world information, is a prerequisite for rational action. Going one step further, a machine without learning capability cannot possess perception. The rational action of an agent (actor) thus calls for possession of all the elementary characteristics of intelligence. Relating artificial intelligence to computational models capable of thinking and acting rationally therefore has pragmatic significance.
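To make this composition of capabilities concrete, here is a minimal sketch in Python (ours, with entirely hypothetical class and method names; the text above prescribes no implementation) of how perception, learning and planning feed into a single rational action:

    # A toy rational-agent loop: each capability presupposes the one before it,
    # which is why rational action requires perception, and perception requires learning.
    class ToyAgent:
        def __init__(self):
            self.knowledge = {}   # what the agent currently believes about the world

        def perceive(self, observation):
            # Perception: turn raw real-world input into knowledge.
            self.knowledge.update(observation)

        def learn(self):
            # Learning: refine the knowledge base (here, trivially, count experience).
            self.knowledge["experience"] = self.knowledge.get("experience", 0) + 1

        def plan(self):
            # Planning: choose the "right task at the right time" from current knowledge.
            return "act" if self.knowledge else "explore"

        def act(self):
            self.perceive({"seen": True})
            self.learn()
            return self.plan()

    agent = ToyAgent()
    print(agent.act())   # -> "act"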
What is the World Wide Web?

The World Wide Web (abbreviated as WWW or W3, and commonly known as the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks. Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now Director of the World Wide Web Consortium (W3C), wrote a proposal in March 1989 for what would eventually become the World Wide Web. At CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use hypertext "... to link and access information of various kinds as a web of nodes in which the user can browse at will", and they publicly introduced the project in December. "The World-Wide Web was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." In the May 1970 issue of Popular Science magazine, Arthur C. Clarke was reported to have predicted that satellites would one day "bring the accumulated knowledge of the world to your fingertips" using a console that would combine the functionality of the Xerox machine, telephone, television and a small computer, allowing data transfer and video conferencing around the globe. In March 1989, Tim Berners-Lee wrote a proposal that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system. With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. This proposal estimated that a read-only web would be developed within three months, and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available." While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, blogs, Web 2.0 and RSS/Atom. The proposal was modeled after the DynaText SGML reader by Electronic Book Technologies, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The DynaText system, licensed by CERN, was technically advanced and was a key player in the extension of SGML ISO 8879:1986 to hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high-energy-physics community, namely a fee for each document and each document alteration.

[Figures: the NeXT Computer used by Tim Berners-Lee at CERN, which became the first web server; the CERN data center in 2010, housing some WWW servers.]

A NeXT Computer was used by Berners-Lee as the world's first web server, and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser (which was a web editor as well), the first web server, and the first web pages [10], which described the project itself. On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup [11]. This date also marked the debut of the Web as a publicly available service on the Internet. The first photo on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band.

What is Web Intelligence?
At this very early stage, we are not sure whether a formal definition of Web Intelligence is useful or desirable. Nevertheless, we suggest the following definition: "Web Intelligence (WI) exploits Artificial Intelligence (AI) and advanced Information Technology (IT) on the Web and Internet." This definition has the following implications. The basis of WI is AI and IT. The "I" happens to be shared by both "AI" and "IT", although with different meanings in each, and "W" defines the platform on which WI research is carried out. The goal of WI is the joint goals of AI and IT on the new platform of the Web. That is, WI applies AI and IT to the design and implementation of Intelligent Web Information Systems (IWIS). An IWIS should be able to perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement. There may never be a standard and non-controversial definition of WI, just as there is no standard definition of AI. One may argue that our definition of WI focuses more on the software aspects of the Web. It is not our intention to exclude any research topic by the proposed definition. The term Web Intelligence should be considered an umbrella, or a label, for a new branch of research centered on the Web. Our definition simply states the scope and goals of WI. This allows us to include any theories and technologies that either fall within the scope or aim at the same goals. To complement the formal definition, we try to make the picture clearer by listing topics to be covered by WI. WI will be an ever-changing research branch. It will evolve with the development of the Web as a new medium for information gathering, storage, processing, delivery and utilization. It is our expectation that WI will evolve into an inseparable research branch of computer science. Although no one can predict the future in detail and without uncertainty, it is clear that WI will have huge impacts on the application of computers, which in turn will affect our everyday lives.

Motivations and Justifications for WI

The introduction of Web Intelligence (WI) can be motivated and justified from both academic and industrial perspectives. Two features of the Web make it a useful and unique platform for computer applications and research: its size and its complexity. The Web contains a huge number of interconnected Web documents, known as Web pages. For example, the popular search engine Google claimed that it could search 1,346,966,000 pages as of February 2001. The sheer size of the Web leads to difficulties in the storage, management, and efficient and effective retrieval of Web documents. The complexity of the Web, in terms of the connectivity and diversity of Web documents, forces us to reconsider many existing information systems, as well as the theories, methodologies and technologies underlying those systems. One has to deal with a heterogeneous collection of structured, unstructured, semi-structured, inter-related, and distributed Web documents consisting of texts, images and sounds, instead of a homogeneous collection of structured and unrelated objects. The latter is the subject of study of many conventional information systems, such as databases, information retrieval, and multimedia systems. To accommodate the needs of the Web, one needs to study issues in the design and implementation of Web-based information systems by combining and extending results from existing intelligent information systems.
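As a small illustration of the semi-structured data such systems must cope with, the following sketch (ours, not the article's; the page content and tag choices are arbitrary) extracts hyperlinks from an HTML document using only the Python standard library. Unlike a database record, nothing guarantees where, or whether, links occur:

    # A minimal sketch: pull hyperlinks out of semi-structured HTML.
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Anchor tags may or may not carry an href; both cases are legal HTML.
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    page = '<html><body><a href="http://example.org/a">A</a> text <a>no link</a></body></html>'
    extractor = LinkExtractor()
    extractor.feed(page)
    print(extractor.links)   # -> ['http://example.org/a']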
Existing theories and technologies need to be modified or enhanced to deal with the complexity of the Web. Although individual Web-based information systems are constantly being deployed, advanced issues and techniques for developing, and for benefiting from, the Web remain to be systematically studied. The challenges the Web brings to computer scientists may justify the creation of a new sub-discipline, WI, for carrying out Web-related research. The Web increases the availability and accessibility of information to a much larger community than any other computer application. The introduction of Personal Computers (PCs) brought computational power to ordinary people; it is the Web that delivers information more effectively to everyone's fingertips. The Web, no doubt, offers a new means for sharing and transmitting information unmatched by other media. The revolution started by the Web is just beginning. New business opportunities, such as e-commerce, e-banking, and e-publication, will increase with the maturity of the Web. The impact of the Web on the business and industrial world can hardly be overemphasized. The creation of a new sub-discipline devoted to Web-related research and applications may therefore have significant value in the future. The need for WI is further illustrated by the fast-growing research and industrial activities centered on it, as shown by searching several search engines with the keyword "Web Intelligence" in February 2001.

Perspectives of WI

As a new branch of research, Web Intelligence exploits Artificial Intelligence (AI) and Information Technology (IT) on the Web. On the one hand, it may be viewed as applying results from these existing disciplines to a totally new domain. On the other hand, WI may also introduce new problems and challenges to the established disciplines. WI may also be viewed as an enhancement or an extension of AI and IT. It remains to be seen whether WI will become a sub-area of AI and IT, or the child of a successful marriage of AI and IT. No matter what happens, however, studies of WI can benefit a great deal from the results, experience, successes and lessons of AI and IT. In their very popular textbook, Russell and Norvig [39] examined the definitions of artificial intelligence from eight other textbooks in order to decide what exactly AI is. They observed that the definitions vary along two dimensions. One dimension deals with the functionality and ability of an AI system, ranging from the thought processes and reasoning ability of the system to its behavior. The other dimension deals with the design philosophy of AI systems, ranging from imitating human problem solving to making rational decisions. The combination of the two dimensions results in four categories of AI systems, adopted from Russell and Norvig [39]:

               Human-centered                 Rationality-centered
    Thinking   Systems that think like humans Systems that think rationally
    Acting     Systems that act like humans   Systems that act rationally

This classification provides a basis for the study of the various views of and approaches to AI. It also clearly defines goals in the design of AI systems. According to Russell and Norvig, the four categories correspond to four approaches: the cognitive modeling approach (thinking humanly), the Turing test approach (acting humanly), the laws of thought approach (thinking rationally), and the rational agent approach (acting rationally). The two rows separating AI systems in terms of thinking and acting may not be the most suitable classification: action is normally the final result of a thinking process.
One may argue that the class of systems acting humanly is a superset of the class of systems thinking humanly. In contrast, the separation of the human-centered approach from the rationality-centered approach may have significant implications for the study of AI. While earlier research on AI focused more on the human-centered approach, the rationality-centered approach has received more attention recently. The first column is centered on humans and leads to the treatment of AI as an empirical science involving hypotheses and experimental confirmation. The human-centered approach represents the descriptive view of AI. Under this view, a system is designed by imitating human problem solving. This implies that a system should have the usual human capabilities, such as knowledge representation, natural language processing, reasoning, planning and learning. The performance of an AI system is measured or evaluated through the Turing test. A system is said to be intelligent if it provides human-level performance. Such a descriptive view dominated the majority of earlier studies of expert systems, a special type of AI system. The second column represents the prescriptive, or normative, view of AI. It deals with theoretical principles and laws that an AI system must follow, instead of imitating humans. That is, a rationalist approach deals with an ideal concept of intelligence, which may be independent of human problem solving. An AI system is rational if it does the right thing and makes the right decision. The normative view of AI is based on well-established disciplines such as mathematics, logic, and engineering. The descriptive and normative views also reflect the experimental and theoretical aspects of AI research. The experimental study represents the descriptive view: it covers theories and models for explaining the workings of the human mind, and applications of AI to solving problems that normally require human intelligence. The theoretical study aims at the development of theories of rationality, and focuses on the foundations of AI. The two views are complementary to each other; studies in one direction may provide valuable insights into the other. Web Intelligence concerns the design and development of intelligent Web information systems. The previous framework for the study of AI can be immediately applied to that of Web Intelligence. More specifically, we can cluster research in WI into the descriptive (human-centered) approach and the prescriptive (rationality-centered) approach, and cluster Web information systems in terms of thinking and acting. Various research topics can be identified and grouped accordingly. Like AI, a foundation for WI can be established by drawing results from many related disciplines:

Mathematics: computation, logic, probability.
Applied Mathematics and Statistics: algorithms, non-classical logics, decision theory, information theory, measurement theory, utility theory, theories of uncertainty, approximate reasoning.
Psychology: cognitive psychology, cognitive science, human-machine interaction, user interface.
Linguistics: computational linguistics, natural language processing, machine translation.
Information Technology: information science, databases, information retrieval systems, knowledge discovery and data mining, expert systems, knowledge-based systems, decision support systems, intelligent information agents.

The topics under each entry are intended only as examples; they do not form an exhaustive list.
In the development of AI, we have witnessed the formation of many of its new sub-branches, such as knowledge-based systems, artificial neural networks, genetic algorithms, and intelligent agents. Recently, non-classical AI topics have received much attention under the name of computational intelligence, which focuses on the computational aspects of intelligent systems.

Topics Covered by WI

In order to study advanced Web technology systematically, and to develop advanced Web-based intelligent information systems, we list several major subtopics under each topic below.

– Web Information System Environment and Foundations:
• competitive dynamics of Web sites,
• emerging Web technology,
• network community formation and support,
• new Web information description and query languages,
• the semantic Web,
• theories of small world Web,
• Web information system development tools,
• Web protocols.

– Web Human-Media Engineering:
• the art of Web page design,
• multimedia information representation,
• multimedia information processing,
• visualization of Web information,
• Web-based human computer interface.

– Web Information Management:
• data quality management,
• information transformation,
• Internet and Web-based data management,
• multi-dimensional Web databases,
• OLAP (on-line analytical processing),
• multimedia information management,
• new data models for the Web,
• object oriented Web information management,
• personalized information management,
• semi-structured data management,
• use and management of metadata,
• Web knowledge management,
• Web page automatic generation and updating,
• Web security, integrity, privacy and trust.

– Web Information Retrieval:
• approximate retrieval,
• conceptual information extraction,
• image retrieval,
• multi-linguistic information retrieval,
• multimedia retrieval,
• new retrieval models,
• ontology-based information retrieval,
• automatic Web content cataloguing and indexing.

– Web Agents:
• dynamics of information sources,
• e-mail filtering,
• e-mail semi-automatic reply,
• global information collecting,
• information filtering,
• navigation guides,
• recommender systems,
• remembrance agents,
• reputation mechanisms,
• resource intermediary and coordination mechanisms,
• Web-based cooperative problem solving.

– Web Mining and Farming:
• data mining and knowledge discovery,
• hypertext analysis and transformation,
• learning user profiles,
• multimedia data mining,
• regularities in Web surfing and Internet congestion,
• text mining,
• Web-based ontology engineering,
• Web-based reverse engineering,
• Web farming,
• Web-log mining,
• Web warehousing.

– Web-Based Applications:
• business intelligence,
• computational societies and markets,
• conversational systems,
• customer relationship management (CRM),
• direct marketing,
• electronic commerce and electronic business,
• electronic library,
• information markets,
• price dynamics and pricing algorithms,
• measuring and analyzing Web merchandising,
• Web-based decision support systems,
• Web-based distributed information systems,
• Web-based electronic data interchange (EDI),
• Web-based learning systems,
• Web marketing,
• Web publishing.

It should be pointed out that WI research is not limited to the topics listed above. We expect that new topics will be added, and existing topics will be regrouped or redefined. In summary, we can observe two ways in which WI research can be characterized. The first is by adding "Web" as a prefix to an existing topic.
For example, from "digital library", "information retrieval", and "agents", we obtain "Web digital library", "Web information retrieval", and "Web agents". Alternatively, we can add "on the Web" as a postfix; for example, "digital library on the Web", "information retrieval on the Web", and "agents on the Web". Our list of research topics is given by the prefix method. However, we must avoid the mistakes of seductive semantics discussed by Bezdek [5], that is, "words or phrases which convey, by being interpreted in their ordinary (non-scientific) usage, a far more profound and substantial meaning about an algorithm or computational architecture than can be readily ascertained from the available theoretical and/or empirical evidence." For the healthy development of Web Intelligence, we have to be realistic about our goals and try to avoid over-selling the subject.

Trends and Challenges of WI-Related Research and Development

Web Intelligence presents excellent opportunities and challenges for the research and development of new-generation Web-based information processing technology, as well as for exploiting business intelligence. With the rapid growth of the Web, research and development on WI have received much attention, and we expect more attention to be focused on WI in the coming years. Many specific applications and systems have been proposed and studied. Several dominant trends can be observed and are briefly reviewed in this section. E-commerce is one of the most important applications of WI. The e-commerce activity that involves the end user is undergoing a significant revolution [42]. The ability to track users' browsing behavior down to individual mouse clicks has brought the vendor and end customer closer than ever before. It is now possible for a vendor to personalize his product message for individual customers on a massive scale; this is called targeted marketing or direct marketing [25]. Web mining and Web usage analysis play an important role in e-commerce for customer relationship management (CRM) and targeted marketing. Web mining is the use of data mining techniques to automatically discover and extract information from Web documents and services. Zhong et al. proposed a way of mining peculiar data and peculiarity rules that can be used for Web-log mining [52], and also proposed ways of targeted marketing by mining classification rules and market value functions [44, 49]. A challenge is to explore the connection between Web mining and related agent paradigms such as Web farming, the systematic refining of information resources on the Web for business intelligence. A minimal illustration of Web-log mining is sketched below.
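The following sketch (ours; the log lines and regular expression are illustrative, not from the article) mines a Web access log in Common Log Format for a simple usage regularity, counting how often each page follows another within a visitor's click stream:

    # Toy Web-log mining: discover page-to-page transition counts per visitor.
    import re
    from collections import defaultdict

    LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+) HTTP/1\.\d" \d+ \d+')

    log = [
        '1.2.3.4 - - [10/Feb/2001:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 1043',
        '1.2.3.4 - - [10/Feb/2001:10:00:05 +0000] "GET /products.html HTTP/1.0" 200 2301',
        '5.6.7.8 - - [10/Feb/2001:10:00:07 +0000] "GET /index.html HTTP/1.0" 200 1043',
        '1.2.3.4 - - [10/Feb/2001:10:00:20 +0000] "GET /order.html HTTP/1.0" 200 512',
    ]

    last_page = {}                    # visitor -> last page requested
    transitions = defaultdict(int)    # (from_page, to_page) -> count

    for line in log:
        m = LOG_LINE.match(line)
        if not m:
            continue                  # skip malformed lines
        visitor, page = m.group(1), m.group(2)
        if visitor in last_page:
            transitions[(last_page[visitor], page)] += 1
        last_page[visitor] = page

    for (src, dst), n in transitions.items():
        print(f"{src} -> {dst}: {n}")
    # -> /index.html -> /products.html: 1
    #    /products.html -> /order.html: 1

Regularities of this kind in Web surfing are exactly the raw material that CRM and targeted-marketing applications refine further.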
Text analysis, retrieval, and Web-based digital libraries form another fruitful research area in WI. Topics in this area include semantic models of the Web, text mining, and the automatic construction of citations. Abiteboul et al. systematically investigated data on the Web and the features of semi-structured data. Zhong et al. studied text mining on the Web, including the automatic construction of ontologies, e-mail filtering systems, and Web-based e-business systems [47, 51]. Web-based intelligent agents aim at improving a Web site or providing help to a user. Liu et al. worked on e-commerce agents [29]; Liu and Zhong worked on Web agents and KDDA (Knowledge Discovery and Data Mining Agents). We believe that Web agents will be a very important issue. It is therefore not surprising that the WI conference is held in parallel with the Intelligent Agents conference. The Web itself has been studied from two aspects: the structure of the Web as a graph, and the semantics of the Web. Studies of Web structure investigate several structural properties of graphs arising from the Web, including the graph of hyperlinks and the graph induced by connections between distributed search servants. The study of the Web as a graph is not only fascinating in its own right, but also yields valuable insight into Web algorithms for crawling and searching.

Goal of Web Intelligence

There are at least three approaches to defining what an AI program should do, i.e. what constitutes "correct" behavior (Russell and Norvig, 1995):
• An AI program should simulate the behavior of humans, e.g. match data from psychology, linguistics, and psychophysics.
• An AI program should solve engineering problems automatically, e.g. guide a robot around a factory floor without accidents.
• An AI program should achieve some platonic goal of "rationality"; inspiration then comes from prior work in philosophy and mathematical logic.

The goal of WI is the joint goals of AI and IT on the new platform of the Web. That is, WI applies AI and IT to the design and implementation of Intelligent Web Information Systems (IWIS).

The Semantic Web

The Web is returning to the traditional grounds of artificial intelligence in order to solve its own problems. It is a mystery to many why Berners-Lee and others believe the Web needs to transform into the Semantic Web. However, it may be necessitated by the growing problems of information retrieval and organization. The first incarnation of the Semantic Web was meant to address this problem by encouraging the creators of web pages to attach some form of metadata (data about data) to their web pages, so that simple facts, like the identity of the author of a web page, could be made accessible to machines. The hope is that people, instead of hiding the useful content of their web pages within text and pictures that are only easily readable by humans, will create machine-readable metadata to allow machines to access their information. To make assertions and inferences from metadata, inference engines would be used. The formal framework for this metadata, called the Resource Description Framework (RDF), was drafted by Hayes, one of the pioneers of artificial intelligence. RDF is a simple language for creating assertions about propositions (Hayes, 2004). The basic concept of RDF is the "triple": any statement can be decomposed into a subject, a predicate, and an object. "The creator of the web-page is Henry Thompson" can be phrased as <http://www.inf.ed.ac.uk/~ht> dc:creator "Henry Thompson". The framework was extended to a full ontology language as described by description logic; this Web Ontology Language (OWL) is thus more expressive than RDF (Welty et al., 2004). The Semantic Web paradigm made one small but fundamental change to the architecture of the Web: a resource (that is, anything that can be identified by a URI) can be about anything. This means that URIs, which were formerly used to denote mostly web pages and other data having some form of byte-code on the Web, can now be about anything, from things whose physical existence is outside the Web to abstract concepts (Jacobs and Walsh, 2004).
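To show how such triples become machine-processable, here is a minimal sketch (ours, not from the article) that stores a few RDF-style statements as plain subject-predicate-object tuples and queries them. The first triple follows the Henry Thompson example above; the "urn:landmark:EiffelTower" identifier is invented here to show that a subject may just as well denote a non-Web thing:

    # RDF-style triples as plain (subject, predicate, object) tuples.
    triples = {
        ("http://www.inf.ed.ac.uk/~ht", "dc:creator", "Henry Thompson"),
        # A URI may denote something whose existence is outside the Web entirely:
        ("urn:landmark:EiffelTower", "rdf:type", "Monument"),
        ("urn:landmark:EiffelTower", "locatedIn", "Paris"),
    }

    def about(subject):
        """Return every statement made about a given resource."""
        return [(p, o) for s, p, o in triples if s == subject]

    print(about("urn:landmark:EiffelTower"))
    # -> [('rdf:type', 'Monument'), ('locatedIn', 'Paris')]  (order may vary)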
A URI can denote not just a web page about the Eiffel Tower but the Eiffel Tower itself (even if there is no web page at that location), or even a web page about "the concept of loyalty." This change is being reworked by Berners-Lee into the revised URI specification and an upcoming normative W3C document entitled "The Architecture of the Web" (Jacobs and Walsh, 2004). What was at first the manual annotation of web pages with proposition-like metadata can now become the full-scale problem of knowledge representation and ontology development, albeit with goals and tools that have been considerably modified since their inception at the origins of artificial intelligence. The question is: has the Semantic Web learned anything from artificial intelligence?

What is the Semantic Web?

The Semantic Web is a web that is able to describe things in a way that computers can understand, for example:
• The Beatles was a popular band from Liverpool.
• John Lennon was a member of the Beatles.
• "Hey Jude" was recorded by the Beatles.

Sentences like the ones above can be understood by people. But how can they be understood by computers? Statements are built with syntax rules; the syntax of a language defines the rules for building the language's statements. But how can syntax become semantics? This is what the Semantic Web is all about: describing things in a way that computer applications can understand. The Semantic Web is not about links between web pages. Rather, it describes the relationships between things (like A is a part of B, and Y is a member of Z) and the properties of things (like size, weight, age, and price). "If HTML and the Web made all the online documents look like one huge book, RDF, schema, and inference languages will make all the data in the world look like one huge database" (Tim Berners-Lee).

Differences of the Semantic Web

The first major difference between early artificial intelligence and the Semantic Web is that the Semantic Web is clearly not pursuing the original goal of AI as stated by the Dartmouth Proposal: "human-level intelligence" (McCarthy et al., 1955). The goal of the Semantic Web is more modest and in line with later artificial intelligence research: creating machines capable of exhibiting "intelligent" behavior. This goal is much harder to test since, if "intelligence" for machines is different from human intelligence, there exists no Turing Test-like procedure to detect merely machine-level intelligence (Turing, 1950). However, the Semantic Web engineers have reasons to hope that their project might fulfill some of the goals of artificial intelligence, in particular the goal of creating usable ontologies of the real world.

Semantic Reconciliation

Semantic reconciliation is a process cycle consisting of four successive activities: scope, create, refine, and articulate. First, the community is scoped: user roles and affordances are appointed. Next, relevant facts are collected from documentation such as natural language descriptions, (legacy) logical schemas, or other metadata, and this scope is decomposed into elicitation contexts. The deliverable of scoping is an initial upper common ontology that organizes the key upper common patterns that are shared and accepted by the community. These upper common patterns define the current semantic interoperability requirements of the community. Once the community is scoped, all stakeholders syntactically refine and semantically articulate these upper common patterns.
Pragmatic Perspective Unification

During unification, a new proposal for the next version of the upper common ontology is produced, aligning relevant parts from the common and divergent stakeholder perspectives. Ultimately, if the semantic reconciliation results in a number of reusable, language-neutral and context-independent patterns for constructing business semantics, articulated with informal meaning descriptions, then the unification is worthwhile.

Semantic Application

Semantic application is a process cycle consisting of two successive activities, select and commit, in which the scoped information systems are committed to selected, consolidated business semantic patterns. This is done by first selecting relevant patterns from the pattern base. Next, the interpretation of this selection is semantically constrained. Finally, the various scoped sources and services are mapped onto (read: committed to) this selection. The selection, and the axiomatization of this selection, should approximate the intended business semantics. This can be verified by automatic verbalization into natural language, and by validation of the unlocked data. Validation or deprecation of the commitments may result in another iteration of the semantic reconciliation cycle.

Business Semantics

Business semantics are the information concepts that live in the organization, understandable by both business and IT. Business semantics describe business concepts as they are used and needed by the business, instead of describing the information from a technical point of view. One important aspect of business semantics is that they are shared between many disparate data sources: many data sources share the same semantics but use different syntax, or format, to describe the same concepts. The way these business semantics are described is less important; several approaches can be used, such as UML, Object Role Modeling, XML, etc. This corresponds to Robert Meersman's statement that semantics are "a (set of) mapping(s) from your representation language to agreed concepts (objects, relationships, behavior) in the real world". In the construction of information systems, semantics have always been crucial. In previous approaches, these semantics were left implicit (i.e. in the mind of the reader or writer), hidden away in the implementation itself (e.g., in a database table or column code), or informally captured in textual documentation. According to Dave McComb, "The scale and scope of our systems and the amount of information we now have to deal with are straining that model." Nowadays, information systems need to interact in a more open manner, and it becomes crucial to formally represent and apply the semantics these systems are concerned with.

Application

Business semantics management empowers all stakeholders in the organization with a consistent and aligned definition of the organization's important information assets. The available business semantics can be leveraged in the so-called business/social layer of the organization. For example, they can easily be coupled to a content management application to provide the business with a consistent business vocabulary, enable better navigation or classification of information, be leveraged by enterprise search engines, or be used to build richer, semantic-web-ready websites. Business semantics can also be used to increase operational efficiency in the technical/operational layer of the organization.
Business semantics provide an abstracted, federated, and virtualized way to access and deliver data in a more efficient and aligned manner. In that respect, the approach is similar to Enterprise Information Integration (EII), with the added benefit that the shared models are described not in technical terms but in a way that is easily understood by the business. Collibra was the first organization to commercialize the idea behind business semantics management; Collibra's approach is based on DOGMA, a research project at the Vrije Universiteit Brussel.

Web Intelligence in Business

Terms like "Semantic Web," "Web 3.0," and the "Data Web" have been used interchangeably to describe the underlying vision behind recently approved technology standards created by the World Wide Web Consortium (W3C). However catchy, none of these buzzwords gives any hint of this new technology's ability to transform the foundation of enterprise software, empower radical new business capabilities, and throttle back IT spending in the notoriously expensive areas of data integration, master data management, and enterprise information management. The Semantic Web is a fundamentally different way of specifying data and data relationships. It is more declarative, more expressive, and more consistently repeatable than Java/C++, Relational Database Management Systems (RDBMS), and XML documents, and it builds upon and preserves those conventional data models' respective strengths. This business case articulates why the Semantic Web will empower, directly and indirectly, new business capabilities and throttle back IT expenditures within medium and large businesses, by transforming the foundation of enterprise software, and data integration in particular. We encourage the reader to take the following actions:
• invest in training and skills development now;
• prototype a solution and explore the new tools now;
• probe your software vendors about their semantic technology roadmap now;
• compel your enterprise architects to formulate a multi-year metadata strategy now.

By the end of this short paper, the reader should understand the overall superiority of Semantic Web technologies and be able to describe why it is very likely that they will be embedded in the fabric of nearly all data-intensive software within several years. In the mid-1990s, the vision of this new economic model became established in the United States of America, and it is now spreading around the world. The surprisingly strong performance of the U.S. economy in recent years has generated an extensive debate about the evolution towards a renewed economic model (Sahlman, 1999). The proponents of the new economy claim that the 1990s mark the beginning of a unique era of economic prosperity. Globalization and computerization are regarded as the compelling factors reshaping the traditional economy. At the least, these factors of the new economy are viewed as somewhat different from the factors of the earlier, industrial age. Although the ideas contained in the new economic model are getting significant attention, there is not yet general agreement about what the new economy really means or how it should be defined and evaluated. Shapiro and Varian (1999) identify this as one of the reasons why the discussion of the new economy is in full swing worldwide. Many industrial leaders and some economists summarize the effects of the combination of globalization and information and communication technology under the umbrella term of the new economy.
In view of the linking of computerization with the global market, the majority of them forecast inflation-free growth of a kind never experienced before. Above all, the development of the American economy over the last three years strengthens the belief of the new-economy optimists: the U.S. unemployment rate declines while the inflation rate remains low. The financial community believes strongly in another economic miracle, which is reflected in the booming stock exchanges (Wadhwani, 1999). For the moment this is still largely limited to the computer industry, but the Internet, e-commerce and the necessary software make a transfer of these growth rates to other industries likely in the near future (Shapiro & Varian, 1999). How fast the new economic model will spread and be adopted around the globe and across industries remains unclear, as does the scale on which productivity will increase. However, it is already becoming apparent that structural modifications are not waiting for the official statistics (Mandel, 2000). Even the European economy, which does not have the same economic prerequisites and conditions as the American economy, is experiencing a shift from the dominance of large-scale enterprises to a larger variety: small and medium enterprises are successfully competing with large-scale enterprises. Traditional European industry is no exception in being affected by new-economy conditions, as companies have already started transforming their business towards e-commerce and services.

What is collaboration software?

Collaboration software is a general term for tools that people use to communicate with each other electronically. The software can be designed for tasks as simple as text-based messaging; it can handle large numbers of items, as with an office calendaring program; or it can be as complex as a computer-aided engineering design system. New attention has been placed on collaboration software because of the potential of the Internet to provide new platforms for the delivery and evolution of complex projects. Traditionally, collaboration software models have been used internally by companies, since companies have work teams that need to transfer and share information between employees. Two popular models of collaboration software are peer-level systems and hierarchical systems. Microsoft Word exemplifies a peer-level system, where peers work together on a project. A hierarchical system is used for larger or more complex projects; hierarchical systems use a central repository of data and changes. More complex collaboration software usually stores the documents or information being developed on a central computer. Team members typically access the centrally stored repository, each updating only their own portion. The collaboration software's primary role is to safeguard against two people changing the same section at the same time. Changes are automatically logged with an identification of the person and the time of the modification. Two users are precluded from simultaneously modifying the same set of data: a person who wishes to change the information is typically required to "check out" a component (stating their intention to change a section) prior to changing it. The collaboration software then locks other people out from making changes until the person holding the lock releases it. As they release their lock, they update the information in the central repository. When others want to change the same section, they must retrieve the latest version; the "check-out" procedure is the step that forces the update. While the information is under modification, others are still permitted to view the unchanged version of the document. A minimal sketch of this check-out discipline follows.
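The following sketch (ours; the class and method names are hypothetical, and a real system would add persistence, timestamps and conflict resolution) models the check-out/check-in cycle just described:

    # Toy central repository enforcing the check-out / check-in discipline.
    class LockError(Exception):
        pass

    class Repository:
        def __init__(self, sections):
            self.text = dict(sections)   # section name -> current content
            self.locks = {}              # section name -> user holding the lock
            self.log = []                # (user, section); a real system also logs the time

        def checkout(self, user, section):
            # Stating the intention to change a section; others are locked out.
            holder = self.locks.get(section)
            if holder is not None and holder != user:
                raise LockError(f"{section} is checked out by {holder}")
            self.locks[section] = user
            return self.text[section]    # the forced retrieval of the latest version

        def checkin(self, user, section, new_text):
            if self.locks.get(section) != user:
                raise LockError(f"{user} does not hold the lock on {section}")
            self.text[section] = new_text
            self.log.append((user, section))   # changes logged with the person's identity
            del self.locks[section]

    repo = Repository({"intro": "draft"})
    repo.checkout("alice", "intro")
    try:
        repo.checkout("bob", "intro")    # rejected while alice holds the lock
    except LockError as e:
        print(e)                         # -> intro is checked out by alice
    repo.checkin("alice", "intro", "final")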
The "check-out" procedure is the step that forces the update. While the information is under modifications, others are still permitted to view the unchanged version of the document. Collaboration software is not restricted to complex, large systems. Many common software programs provide peer level collaboration. Microsoft Word, for example, includes features like change tracking. This allows multiple people to work on the same document. The inherent limitation of tools such as this is that two people cannot work simultaneously on the same document. They need to agree, ahead of time, on who will be modifying the document at a particular time. When they have completed their changes, they need to forward the "source" or "master" document to the next person scheduled to modify it. No additional changes are made until the next person has completed their revisions. Obviously this negotiation becomes difficult for large documents under development by many people. Components of Semantic Web What is XML? • XML stands for EXtensible Markup Language • XML is a markup language much like HTML • XML was designed to carry data, not to display data • XML tags are not predefined. You must define your own tags • XML is designed to be self-descriptive • XML is a W3C Recommendation XML Documents Form a Tree Structure XML documents must contain a root element. This element is "the parent" of all other elements. The elements in an XML document form a document tree. The tree starts at the root and branches to the lowest level of the tree. All elements can have sub elements (child elements): ..... The terms parent, child, and sibling are used to describe the relationships between elements. Parent elements have children. Children on the same level are called siblings (brothers or sisters). All elements can have text content and attributes (just like in HTML). Example: The image above represents one book in the XML below: Everyday Italian Giada De Laurentiis 2005 30.00 Harry Potter J K. Rowling 2005 29.99 Learning XML Erik T. Ray 2003 39.95 The root element in the example is . All elements in the document are contained within . The element has 4 children: ,< author>, , . What is RDF? • RDF stands for Resource Description Framework • RDF is a framework for describing resources on the web • RDF is designed to be read and understood by computers • RDF is not designed for being displayed to people • RDF is written in XML • RDF is a part of the W3C's Semantic Web Activity • RDF is a W3C Recommendation RDF - Examples of Use • Describing properties for shopping items, such as price and availability • Describing time schedules for web events • Describing information about web pages (content, author, created and modified date) • Describing content and rating for web pictures • Describing content for search engines • Describing electronic libraries RDF Example Here are two records from a CD-list: Title Artist Country Company Price Year Empire Burlesque Bob Dylan USA Columbia 10.90 1985 Hide your heart Bonnie Tyler UK CBS Records 9.90 1988 Below is a few lines from an RDF document: Bob Dylan USA Columbia 10.90 1985 Bonnie Tyler UK CBS Records 9.90 1988 . . . The first line of the RDF document is the XML declaration. The XML declaration is followed by the root element of RDF documents: . The xmlns:rdf namespace, specifies that elements with the rdf prefix are from the namespace "http://www.w3.org/1999/02/22-rdf-syntax-ns#". 
What is RDF?
• RDF stands for Resource Description Framework.
• RDF is a framework for describing resources on the web.
• RDF is designed to be read and understood by computers.
• RDF is not designed for being displayed to people.
• RDF is written in XML.
• RDF is a part of the W3C's Semantic Web Activity.
• RDF is a W3C Recommendation.

RDF – Examples of Use
• Describing properties for shopping items, such as price and availability.
• Describing time schedules for web events.
• Describing information about web pages (content, author, created and modified date).
• Describing content and rating for web pictures.
• Describing content for search engines.
• Describing electronic libraries.

RDF Example

Here are two records from a CD list:

    Title             Artist         Country   Company       Price   Year
    Empire Burlesque  Bob Dylan      USA       Columbia      10.90   1985
    Hide your heart   Bonnie Tyler   UK        CBS Records    9.90   1988

Below are a few lines from an RDF document:

    <?xml version="1.0"?>
    <rdf:RDF
      xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
      xmlns:cd="http://www.recshop.fake/cd#">
      <rdf:Description rdf:about="http://www.recshop.fake/cd/Empire Burlesque">
        <cd:artist>Bob Dylan</cd:artist>
        <cd:country>USA</cd:country>
        <cd:company>Columbia</cd:company>
        <cd:price>10.90</cd:price>
        <cd:year>1985</cd:year>
      </rdf:Description>
      <rdf:Description rdf:about="http://www.recshop.fake/cd/Hide your heart">
        <cd:artist>Bonnie Tyler</cd:artist>
        <cd:country>UK</cd:country>
        <cd:company>CBS Records</cd:company>
        <cd:price>9.90</cd:price>
        <cd:year>1988</cd:year>
      </rdf:Description>
    </rdf:RDF>

The first line of the RDF document is the XML declaration. The XML declaration is followed by the root element of RDF documents: <rdf:RDF>. The xmlns:rdf namespace specifies that elements with the rdf prefix are from the namespace "http://www.w3.org/1999/02/22-rdf-syntax-ns#". The xmlns:cd namespace specifies that elements with the cd prefix are from the namespace "http://www.recshop.fake/cd#". The <rdf:Description> element contains the description of the resource identified by the rdf:about attribute. The elements <cd:artist>, <cd:country>, <cd:company>, etc. are properties of that resource.

OWL

OWL, the Web Ontology Language, is intended to be used when the information contained in documents needs to be processed by applications, as opposed to situations where the content only needs to be presented to humans. OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. This representation of terms and their interrelationships is called an ontology. OWL has more facilities for expressing meaning and semantics than XML, RDF, and RDF-S, and thus OWL goes beyond these languages in its ability to represent machine-interpretable content on the Web. OWL is a revision of the DAML+OIL web ontology language, incorporating lessons learned from the design and application of DAML+OIL. The OWL language is described by a set of documents, each fulfilling a different purpose and catering to a different audience. The following provides a brief roadmap for navigating through this set of documents:
• The OWL Overview gives a simple introduction to OWL by providing a language feature listing with very brief feature descriptions;
• The OWL Guide demonstrates the use of the OWL language by providing an extended example. It also provides a glossary of the terminology used in these documents;
• The OWL Reference gives a systematic and compact (but still informally stated) description of all the modelling primitives of OWL;
• The OWL Semantics and Abstract Syntax document is the final and formally stated normative definition of the language;
• The OWL Web Ontology Language Test Cases document contains a large set of test cases for the language;
• The OWL Use Cases and Requirements document contains a set of use cases for a web ontology language and compiles a set of requirements for OWL.

The suggested reading order of the first four documents is as given, since they are listed in increasing degree of technical content. The last two documents complete the documentation set.

OWL Lite RDF Schema Features

The following OWL Lite features related to RDF Schema are included.
• Class: A class defines a group of individuals that belong together because they share some properties. For example, Deborah and Frank are both members of the class Person. Classes can be organized in a specialization hierarchy using subClassOf. There is a built-in most general class named Thing that is the class of all individuals and is a superclass of all OWL classes. There is also a built-in most specific class named Nothing that is the class that has no instances and is a subclass of all OWL classes.
• rdfs:subClassOf: Class hierarchies may be created by making one or more statements that a class is a subclass of another class. For example, the class Person could be stated to be a subclass of the class Mammal. From this a reasoner can deduce that if an individual is a Person, then it is also a Mammal.
• rdf:Property: Properties can be used to state relationships between individuals, or from individuals to data values. Examples of properties include hasChild, hasRelative, hasSibling, and hasAge.
The first three can be used to relate an instance of the class Person to another instance of the class Person (and are thus occurrences of owl:ObjectProperty), and the last (hasAge) can be used to relate an instance of the class Person to an instance of the datatype Integer (and is thus an occurrence of owl:DatatypeProperty). Both owl:ObjectProperty and owl:DatatypeProperty are subclasses of the RDF class rdf:Property.
• rdfs:subPropertyOf: Property hierarchies may be created by making one or more statements that a property is a subproperty of one or more other properties. For example, hasSibling may be stated to be a subproperty of hasRelative. From this a reasoner can deduce that if an individual is related to another by the hasSibling property, then it is also related to the other by the hasRelative property.
• rdfs:domain: A domain of a property limits the individuals to which the property can be applied. If a property relates an individual to another individual, and the property has a class as one of its domains, then the individual must belong to the class. For example, the property hasChild may be stated to have the domain of Mammal. From this a reasoner can deduce that if Frank hasChild Anna, then Frank must be a Mammal. Note that rdfs:domain is called a global restriction, since the restriction is stated on the property itself and not just on the property when it is associated with a particular class. See the discussion below on property restrictions for more information.
• rdfs:range: The range of a property limits the individuals that the property may have as its value. If a property relates an individual to another individual, and the property has a class as its range, then the other individual must belong to the range class. For example, the property hasChild may be stated to have the range of Mammal. From this a reasoner can deduce that if Louise is related to Deborah by the hasChild property (i.e., Deborah is the child of Louise), then Deborah is a Mammal. Range is also a global restriction, as is domain above. Again, see the discussion below on local restrictions (e.g. allValuesFrom) for more information.
• Individual: Individuals are instances of classes, and properties may be used to relate one individual to another. For example, an individual named Deborah may be described as an instance of the class Person, and the property hasEmployer may be used to relate the individual Deborah to the individual StanfordUniversity.
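These deductions are mechanical. As a rough sketch (ours; real OWL reasoners are far more complete), the following lines forward-chain the rdfs:domain, rdfs:subClassOf and rdfs:subPropertyOf rules from the examples above over a small set of triples:

    # Naive forward chaining over RDFS-style triples.
    facts = {
        ("Person", "rdfs:subClassOf", "Mammal"),
        ("hasSibling", "rdfs:subPropertyOf", "hasRelative"),
        ("hasChild", "rdfs:domain", "Mammal"),
        ("Frank", "hasChild", "Anna"),
        ("Deborah", "rdf:type", "Person"),
    }

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            new = set()
            for s, p, o in facts:
                # rdfs:domain - the subject of the property belongs to the domain class.
                for prop, _, cls in [f for f in facts if f[1] == "rdfs:domain"]:
                    if p == prop:
                        new.add((s, "rdf:type", cls))
                # rdfs:subClassOf - membership propagates up the class hierarchy.
                if p == "rdf:type":
                    for sub, _, sup in [f for f in facts if f[1] == "rdfs:subClassOf"]:
                        if o == sub:
                            new.add((s, "rdf:type", sup))
                # rdfs:subPropertyOf - statements propagate to the superproperty.
                for sub, _, sup in [f for f in facts if f[1] == "rdfs:subPropertyOf"]:
                    if p == sub:
                        new.add((s, sup, o))
            if not new <= facts:
                facts |= new
                changed = True
        return facts

    derived = infer(facts)
    print(("Frank", "rdf:type", "Mammal") in derived)     # -> True  (via rdfs:domain)
    print(("Deborah", "rdf:type", "Mammal") in derived)   # -> True  (via rdfs:subClassOf)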
Conclusion

While it may be difficult to define exactly what Web Intelligence (WI) is, one can easily argue for the need and necessity of creating such a subfield of study in computer science. With the rapid growth of the Web, we foresee fast-growing interest in Web Intelligence. Roughly speaking, we define Web Intelligence as a field that "exploits Artificial Intelligence (AI) and advanced Information Technology (IT) on the Web and Internet." It may be viewed as a marriage of artificial intelligence and information technology in the new setting of the Web. By examining the scope and historical development of artificial intelligence, we discuss some fundamental issues of Web Intelligence in a similar manner. There is no doubt in our minds that results from AI and IT will influence the development of WI. Instead of searching for a precise and non-controversial definition of WI, we list topics that might interest a researcher working on Web-related issues. In particular, we identify some challenging issues of WI, including e-commerce, studies of Web structures and Web semantics, Web information storage and retrieval, Web mining, and intelligent Web agents. We advocate a new conference devoted to WI, namely the Asia-Pacific Conference on Web Intelligence. The conference will be an international forum for researchers and practitioners to present the state of the art in the development of Web intelligence, to examine performance characteristics of various approaches in Web-based intelligent information technology, and to cross-fertilize ideas on the development of Web-based intelligent information systems among different domains.