Tuesday, 01 July 2008

Education

Welcome to my website wwwnuh.blogspot.com. I hope you enjoy browsing this site. Thank you for visiting.


Something good is not necessarily right. Something right is not necessarily good. Something fine is not necessarily valuable. Something valuable or useful is not necessarily fine.

An open mind and a closed mouth are a happy combination.

The more you talk about yourself, the more likely you are to lie.

If you cannot be a clever person, be a good one.

A true friend warms you with their presence, trusts you with their secrets, and remembers you in their prayers.

Prayer gives strength to the weak, turns unbelievers into believers, and gives courage to the fearful.

If we do good, goodness is what we will receive in return.

A smile not only brightens the face, it also warms the soul.

Love is proud and gentle. It is better to have love than to have all the stars in the sky.

What matters is not how long we live, but how we live.

Good advice never comes too late.

Envy directed at another person only wounds yourself.

You only live once in this world, but if you live it right, once is enough.

Beautiful memories of the past are there to be cherished, not dwelt upon.

Fear is not to be savored, but to be faced.

The wise always fill their lives with many friendships.

Your tongue determines who you are.

Of everything in a household, the children are the finest.

Love often runs away when we chase it, yet love is also often let go when it comes near.

When evil is repaid with evil, that is vengeance. When kindness is repaid with kindness, that is ordinary. When kindness is repaid with evil, that is cruelty. But when evil is repaid with kindness, that is noble and praiseworthy.

If you did not start today with a smile, it is not too late to try again tomorrow.

Keep your eyes wide open before marriage, and half closed afterwards.

True friendship is like health: we only realize its value once we have lost it.

A friend is someone who can hear the song in your heart and sing it back to you when you have forgotten the words.

Befriend those who stand up for the truth. They are an ornament in times of joy and a shield in times of hardship.

Yet we will never have a friend if we expect someone without faults. Every person is good if we can see their goodness, and delightful if we can see their uniqueness; but every person will seem bad and boring if we can see neither.

No one is perfect. Those willing to learn from their mistakes are wise. It is sad to watch people insist they are right even when proven wrong.

If we fill our hearts with regret for the past and worry for the future, we have no today left to be grateful for.

Why feel small and ashamed when insulted? The one who is insulted is actually collecting free merit without any effort.

The noblest of people are those who have good manners, who humble themselves when in high position, who forgive when able to retaliate, and who act justly when strong. (Caliph Abdul Malik bin Marwan)

Whoever is always full grows much flesh; whoever grows much flesh grows strong in desire; whoever is strong in desire gathers many sins; whoever gathers many sins grows hard of heart; and whoever is hard of heart drowns in the calamities of this world and its splendor.

Truly, some words are harder than stone, sharper than the prick of a needle, more bitter than aloes, and hotter than embers. Truly, the heart is a field, so sow it with good words; even if not all of them grow, surely some of them will.

There is no store more useful than knowledge.
There is nothing more fortunate than good manners.
There is no finer companion than reason.
There is no unseen thing closer than death.

Failing once does not mean failing forever.

Happiness can never be bought with money.

A person who laughs too much loses authority.
A person who likes to insult others will be insulted in turn.
A person who loves the hereafter will find that the world follows along.
Whoever guards the honor of others will have their own honor guarded.

Truly, those who act justly are, in the sight of Allah, as if upon pulpits of light.
They are the ones who judge fairly,
toward the people and toward their own families alike.

Three traits leave their bearer without peace in life: envy, spite, and bad character.

Live however you please,
for surely you will die.
Love whomever you like,
for surely you will be parted from them.
Do whatever you wish,
for surely you will receive its recompense.

Speech is of no good except with action.
Wealth is of no good except with generosity.
A friend is of no good except with loyalty.
Charity is of no good except with sincere intention.
Life is of no good except with health and security.

Many people will come and go from your life, but only true friends leave footprints in your heart.

To handle yourself, use your head. To handle others, use your heart.

Anger is only one word away from danger.

Great minds discuss ideas;
average minds discuss events;
and small minds discuss people.

Allah gives every bird its food, but He does not throw it into their nests.

He who loses money loses much;
he who loses a friend loses more;
but he who loses faith loses everything.

Learn from the mistakes of others. You cannot live long enough to make them all yourself.

The tongue weighs practically nothing, yet few people can hold it.

Smart people learn from their own mistakes. Smarter people learn from the mistakes of others.


Research in ELT

Population & Sampling

A population is the group of people or things that a researcher really wants to know something about. A population might be the members of a class, all the CJ students at USM, all the people in a city, all the people in a state, all the people in the US, or all the people in the world. Most of the time, CJ researchers want to generalize findings to everyone. We do not want to just say that GRE scores are related to graduate success in the USM CJ department, but that scores are related to success in CJ in general.

Rarely can we measure what we want to study in every single person in the population. This is a census, and only the federal government can afford to even try it. CJ researchers almost always resort to using samples of the population they want to study. When you go to your doctor, he does not take out all of your blood to check your blood sugar. He takes only a sample. The idea is that the sample will be close enough to the rest of your blood to gather the needed information. A sample is a subset of a population. We use the sample to infer information about the population. A person or thing in a sample is called an element.

When we take a sample, we want it to be as close to the population as possible. The best way we have found to do this is probability sampling. Elements must have a known, equal and independent chance of being selected. So that each element of a population has an equal chance of being selected, we use random selection. The use of random selection lets us (1) control for bias, and (2) use probability theory in our analysis. Probability theory is the basis of statistical inference. It lets us make statements about the level of error that is likely in generalizing our findings in a sample to the population. There are several types of probability sample.

TYPES OF SAMPLING: USES, METHODS AND POTENTIAL PROBLEMS

A. Random sampling

1. Simple random sampling: a group of people are selected at random from a complete list or map of a given population.

Uses: where true random sampling is essential.

Method: One method is to take the list or map and give each unit a number, write the numbers on individual slips of paper, put them in a bag and mix the slips up thoroughly, and then draw out the number of slips required. Alternatively a random number table can be used. If no suitable list or map exists, it may be possible to use participatory methods to solve this problem.

Potential problems: may be very costly, particularly where populations are geographically dispersed and/or individuals are difficult to trace because of, for example, marriage or migration. Even apparently complete lists may systematically exclude some relevant categories of respondent. In particular, lists of registered entrepreneurs are likely to exclude women in enterprises. Conversely, lists of female credit beneficiaries may not be a reliable basis for selection of credit users. Whether or not this matters will depend on the nature of the inquiry.
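
To make the selection step concrete, here is a minimal Python sketch (not part of the original text) that draws a simple random sample from a numbered list; the frame of household names is invented for illustration.

import random

# Hypothetical sampling frame: a numbered list of 200 households.
frame = [f"household_{i}" for i in range(1, 201)]

# Draw 20 elements without replacement; random.sample gives every
# element of the frame an equal chance of being selected.
sample = random.sample(frame, k=20)
print(sample)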

2. Systematic random sampling: a group of people are selected in a systematically random manner from a complete list of a given population.

Uses: where very large numbers are included in the target population and simple random sampling is difficult, or where lists are already grouped into sections or classes.

Method and challenges: there are many possible systems, e.g. taking every tenth name or every fifth name.

Potential problems: Similar to simple random sampling. It is also crucial that the system selected does not bias the sample. For example selecting every tenth name from a list compiled of groups of ten members where the first name in each group is that of the President.
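
As a rough illustration of the "every tenth name" system described above, the following Python sketch takes every k-th name from a random starting point; the frame is invented, and the random start is one simple way to reduce the bias problem noted above.

import random

# Hypothetical frame of 1,000 names; we want a sample of about 100.
frame = [f"name_{i}" for i in range(1, 1001)]

step = len(frame) // 100          # sampling interval k = 10
start = random.randrange(step)    # random starting point within the first interval
sample = frame[start::step]       # then every k-th name
print(len(sample), sample[:5])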

3. Stratified random sampling: when populations are divided into subgroups depending on particular characteristics.

Uses: when the nature of the issues to be investigated means that it is important to give respondents from particular subgroups an equal chance of representation and this would not happen through random sampling.

Method: the relevant characteristics to be used for stratification are identified on the basis of the questions to be asked e.g. membership or non-membership of an organisation, female or male members. A random list is then drawn up for each subgroup and respondents chosen randomly within each.

Potential problems: the identification of the characteristics for classification of respondents is crucial and may need to be refined during investigation.
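
The stratification step can be sketched in Python as follows; the toy population of members and non-members, the strata, and the sample sizes are illustrative assumptions only.

import random

# Hypothetical population tagged with a stratification variable.
population = ([("member", f"m_{i}") for i in range(1, 301)] +
              [("non_member", f"n_{i}") for i in range(1, 701)])

# Group elements by stratum, then select randomly within each stratum.
strata = {}
for stratum, person in population:
    strata.setdefault(stratum, []).append(person)

sample = {name: random.sample(people, k=30) for name, people in strata.items()}
print({name: len(chosen) for name, chosen in sample.items()})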

4. Cluster sampling: where clusters are randomly selected and all individuals or households in particular clusters are interviewed.

Uses: when the target population is very large and/or geographically dispersed making simple random sampling extremely expensive and time-consuming.

Method and challenges: clusters may be geographical, for example villages or markets. They may also be, for example, microfinance groups or particular social categories within geographical locations, e.g. all upper-caste households.

Potential problems: it is important to ensure that important subgroups are not left out and also to consider any potential bias in the analysis. For example, if all the clusters produced by random selection are large villages, would the results be different if some of the villages had been very small, e.g. because of fewer facilities or a different social structure?
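
A minimal Python sketch of cluster selection, assuming a made-up set of villages and their households: whole clusters are drawn at random and then every household in the chosen clusters is interviewed.

import random

# Hypothetical clusters: 20 villages, each with 50 households.
villages = {f"village_{v}": [f"v{v}_hh_{h}" for h in range(1, 51)]
            for v in range(1, 21)}

# Randomly select whole clusters, then take everyone inside them.
chosen = random.sample(list(villages), k=4)
respondents = [hh for v in chosen for hh in villages[v]]
print(chosen, len(respondents))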

5. Random walk: when the interviewer follows a random route.

Uses: where no list exists from which a random sample can be selected using the above methods.

Method: the interviewer follows specific random instructions, e.g. take the first road on the right, interview at the second house on your left, continue down the road, interview the tenth household on your right, and so on, interviewing individuals as they are encountered.

Potential problems: care must be taken to avoid bias, for example the bias introduced by ignoring very small side streets.

6. Staged sampling: where samples are selected within samples, e.g. a random sample or random walk within a cluster.

B. Non-random sampling

1. Quota sampling: quotas for certain types of people or organisations are selected for interview.

Uses: when the nature of the issues to be investigated means that it is important to give respondents from particular subgroups a chance of being selected which is disproportionate to their numerical strength e.g. where it is important to include a significant number of respondents from minority populations, female entrepreneurs etc.

Method and challenges: The categories for which quotas are to be used and the quotas to be allocated are determined based on the issue to be addressed. Common criteria are age, gender, occupation and whether people live in project or non-project areas. The quotas are fixed depending on the types of issues to be investigated but respondents within each quota category are selected randomly.

Potential problems: The categories on which quotas are based are crucial and may need to be refined as the investigation progresses.
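
The quota logic might be sketched like this in Python; the categories and quota sizes are invented. Quotas fix how many respondents each category contributes, while selection within each category stays random.

import random

# Hypothetical pool tagged with a quota category.
pool = ([("female_entrepreneur", f"f_{i}") for i in range(1, 81)] +
        [("male_entrepreneur", f"m_{i}") for i in range(1, 321)])

# Quotas deliberately over-represent the smaller category.
quotas = {"female_entrepreneur": 30, "male_entrepreneur": 30}

by_category = {}
for category, person in pool:
    by_category.setdefault(category, []).append(person)

# Within each quota category, respondents are still chosen randomly.
sample = {c: random.sample(by_category[c], k=n) for c, n in quotas.items()}
print({c: len(chosen) for c, chosen in sample.items()})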

2. Purposive sampling: similar to quota samples but where respondents within each quota are selected to represent diversity.

Uses: where it is particularly important to explore the range of different potential impacts, e.g. ensuring that the quota for women includes a selection of single women, very old women, literate women, and so on.

Method: selection of respondents is based on prior analysis and hypotheses of the different possible types of impact on different stakeholders.

Potential problems: it is important to be continually reflexive in response to information as it is obtained to ensure that diversity is properly understood and captured.

3. Chain sampling or snowballing: a first contact is selected and interviewed and then asked to suggest other interviewees, and so on.

Uses: This method is useful for identifying minority groups or occupations within communities.

Method: it is important that all suggested interviewees are followed up in order to avoid bias. Questions may be cumulative to build up a complete picture of the particular population under study.

Potential problems: the chain may be biased because of the particular networks chosen. This can be overcome through probing investigation, by combining the chain with, e.g., a random walk, or by selecting a number of such chains by another random method.
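
Chain sampling can be pictured as a breadth-first walk over a referral network. The sketch below uses an invented network and simply follows up every suggested interviewee once, as the method above requires.

from collections import deque

# Hypothetical referral network: each interviewee suggests further contacts.
referrals = {
    "first_contact": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": [],
    "d": ["e"],
    "e": [],
}

seen, queue, chain = set(), deque(["first_contact"]), []
while queue:
    person = queue.popleft()
    if person in seen:
        continue
    seen.add(person)
    chain.append(person)                      # "interview" this person
    queue.extend(referrals.get(person, []))   # follow up every suggestion
print(chain)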

4. Genealogy-based sampling: entire families and their relatives may be selected.

Uses: where it is important to include a cross-section of the community by age and sex and where costs do not permit use of a random sampling frame and/or no available map or list exists.

Method: The assistance of the first respondent is used to draw up a genealogy and then each member is followed up as in chain sampling.

Potential problems: In some contexts stratification between families or particular cultural characteristics of particular kin groups means that there are more similarities between family members than members of different families. Here considerable care needs to be taken to ensure that the selection of families is representative.

5. Matched samples: similar pairs of villages, projects or types of respondents are selected in order to compare them.

Uses: where it is important for control groups to be equivalent in size.

Method:

Potential problems: it is crucial to bear in mind possible ways in which the matched samples may differ and the problems involved in selection of any control group.

C. Repeat sampling methods

1. Repeat survey: where the entire survey process is repeated, including the sampling.

Uses: where data is needed to capture seasonal variations or before and after situations and where complete random sampling is needed.

Method: A very similar questionnaire must be used each time, although some questions may be adapted e.g. to particular seasonal circumstances.

Potential problems: a large sample size is needed to make accurate comparisons over time. Also, genuine changes over time may become confounded with random differences between the samples used.

2. Panel or cohort surveys: the same sample of people or organisations is contacted several times over a relatively long period.

Uses: where data is needed to capture seasonal variations or before and after situations but where it is important to follow through processes over time and/or complete random sampling is too expensive.

Method: Here questionnaires may be cumulative to build up case studies over time.

Potential problems: there may be problems of respondent fatigue and drop-out. Another danger is that respondents may change the way they act because they are in the study.

3. Rotating survey: a combination of panel and repeat survey methods where one fraction of the sample is changed each time the survey is repeated.

Uses: where data is needed to capture seasonal variations or before and after situations. This avoids the problems of respondent fatigue and lessens the problems of random variations. It also enables some processes to be followed through.

Method: each interviewee is interviewed only a fixed number of times and then replaced. For repeat interviewees, questions may be varied to capture processes of change.

Potential problems: although this method combines the advantages of the two other methods, it may also suffer from similar drawbacks.


Sampling Error

The principal aim of any sampling procedure is to obtain a sample which, subject to limitations of size, will reproduce the characteristics of the population being studied, especially those of immediate interest, as closely as possible. In practice two types of error will arise from any sampling procedure: first, sampling bias may arise in the way the selection is carried out; and second, random sampling error may arise in the sample obtained, due to chance differences between the members of the population included in, or excluded from, the sample. Total sampling error in the sample issued for interviewing consists of these two taken together.

The key difference between the two is that random sampling error decreases as the sample size is increased, whereas sampling bias is not eliminated or reduced in that way: it is a constant characteristic unless steps are taken to improve the quality of sample selection.

An important source of sampling bias is a sampling frame (a list of the members of the total population of interest from which a sample for study is to be drawn) which does not in fact cover all of the intended population. For example, there may be systematic differences between people who do or do not enter themselves on the electoral register or own a telephone, so that lists of such persons are not completely representative of the adult population. Another source of bias is random sampling that is not in practice completely random, because the lists and records used as the sampling frame are not put together randomly, but presented in some systematic manner not known to the researcher using them.

After interviewing is completed, non-response bias may be discovered in survey results. The combination of sampling error and survey non-response bias together determine the representativeness of the survey data produced by the study.
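
The key contrast above, that random sampling error shrinks as the sample grows while sampling bias does not, can be demonstrated with a small simulation. The population, the flawed frame, and the numbers below are all invented for illustration.

import random
import statistics

random.seed(1)

# Hypothetical population of 10,000 values; the flawed frame omits the lowest fifth,
# mimicking a sampling frame that misses part of the intended population.
population = [random.gauss(100, 20) for _ in range(10_000)]
flawed_frame = sorted(population)[2_000:]
true_mean = statistics.mean(population)

def sample_means(frame, n, repeats=500):
    return [statistics.mean(random.sample(frame, n)) for _ in range(repeats)]

for n in (25, 100, 400):
    spread = statistics.stdev(sample_means(population, n))             # random sampling error
    bias = statistics.mean(sample_means(flawed_frame, n)) - true_mean  # frame bias
    # The spread shrinks roughly with the square root of n; the bias stays put.
    print(n, round(spread, 2), round(bias, 2))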


Sunday, May 4, 2008

Encapsulated Freedom of Thinking

One of the major characteristics of philosophy is that it encourages freedom of inquiry. The notion of freedom of inquiry is itself inherent in the notion of freedom to ask questions. In fact, philosophers say that the world would stop spinning if human beings stopped asking questions. And in philosophy there are two fundamental questions that have kept this branch of science breathing: "why are things the way they are, and why not in different states?"

In a seminar I attended recently, there was an interesting presentation on the impact of the utilization of IT in education on the role of the teacher in the instructional process. The issue was addressed within the framework of educational philosophy. I think it was no coincidence that the two questions were cited again in the presentation. Anyway, the main proposition made was that the utilisation of IT in education has somewhat reduced the teacher's role to a mere "tour guide-like" position, which is not favourable.

Pertaining to this issue, as an educator, I am fully aware of the notion of teacher conservatism, which identifies a tendency among teachers to preserve the old way of doing things and which, in many cases, has been blamed as the main cause of the failure of some innovations in education. However, just like the philosophers who like to use their opponents' arguments to strike back, I would like to use the two questions to bring up an upside-down view of the issue. Why do we still look at the nature of teaching and learning the way we do now, whereas in reality the world has changed and will keep changing? Why don't we change our conceptualization of teaching and learning to one that goes with the reality of the ever-changing world?

Saturday, May 3, 2008

On Becoming the First to Know

My daily routine of online searching for information, both for academic and other purposes, recently alerted me to the narrowing gap between a teacher and his students in terms of who is better informed about the latest developments in a given field. A decade ago, when academic publications still took the form of printed materials, teachers were always granted faster access to the latest issues in their field than their students, because they were financially better able to afford such access.
Nowadays, since the introduction of the internet, teachers (especially those working in internet-connected areas or workplaces) no longer enjoy such a monopoly and authority. The internet has made the opportunity to access any kind of information equal for everybody. The internet with its search engines has radically reduced the time needed to track down information to mere seconds, whereas only a decade ago one had to wait a couple of weeks to catch sight of newly published books or journals.
So, teachers and lecturers, when you come to your classes, do not assume that you are always better informed about your course than your students. They might have spent hours browsing and collecting the latest information about the topic you are going to present. They might be waiting for you with a stack of information that you have never come across. Get ready to listen to them, and race with them to be the first to know!!!

Monday, April 21, 2008

Surviving The Digital Interaction

Winston Churchill once said, “We shape our buildings, and afterwards our buildings shape us”. In sociological contexts we notice that human beings of any race develop and nurture their culture, but at the same time they are very much shaped by that culture. Observing the widespread use of IT gadgets and digital means of communication such as the hand phone and the internet, we would agree that a new culture (of communication) is developing. In Malaysia, for example, an official report says that by mid-December 2006 there were more than 17 million registered hand phone numbers, and the number of internet users is increasing rapidly day by day. However, if we agree with Churchill, and as some communication experts have indicated, a question worth pondering needs to be answered: are we really controlling the technology, or is the technology controlling us?

Of course, our answers would fall into at least one of three categories: we are controlling the technology, the technology is controlling us, or we and the technology are controlling each other. Nevertheless, here I would like to share with you some ideas about how this new mode of communication is influencing our interpersonal interaction and judgments, and how we could possibly handle it.

I currently belong to five email groups. Their combined membership reaches 500. They are mostly Indonesians currently staying in Australia, Malaysia, and Indonesia. The interesting part is that most of them have never met each other. They only interact online. Joining these groups is enough to keep me busy selecting which emails are worth reading and which ones to delete (even without reading them first).

However, as an email group is actually a kind of virtual community nurturing a virtual culture, it is interesting to watch how these people project themselves through their emails and are identified by other members solely by what they write. So far, my observation is that the interactions are much more chaotic than face-to-face interaction. This might be due to the fact that in an email group, once you post your idea, you are actually inviting the whole population of the group to read, evaluate and respond to you. You might expect tens of responses to a single issue. When you get negative feedback, the whole population witnesses it. Then you have to “save face” by posting a rebuttal. This later grows into a long and unnecessary, often insulting, debate without a conclusion. How could this happen among people who never see each other? There might be several explanations, but I think this one must be among them: the nature of email interaction, which is devoid of physical identification, has put the members in a comfortable position to write freely whatever they want to write. It seems that a “nothing to lose” type of interaction is operating here.

Another story: I once sent an SMS to a new friend whom I had met only once. I wrote: “Hi, how is your weekend? Still busy preparing for the presentation?”. This person replied: “Yup, really busy. Thanx”. Of course, the reply clearly informed me that this friend was busy and did not expect any further SMS exchange at that particular time. However, the text made me feel that I had been responded to inappropriately. It was not a finely-tuned reply. A picture of the personality of this new friend instantly appeared in my mind. I knew that my judgment might be subjective and, to some extent, premature and biased. But I thought the story would have turned out better if it had been a face-to-face interaction, or if this friend had been aware of the issue and possessed the skills for handling SMS-mediated interactions.

In a face-to-face interaction our judgment of a person’s personality is primarily shaped by his or her tendencies in verbal communication. This way, we normally categorize a person as talkative, quiet, friendly, extrovert, introvert, and so on (even smart or stupid). Also, in face-to-face interactions our interpretation of a partner’s verbal language is made easier by such extra-linguistic features as context, mimicry, gesture, tone, and intonation. However, in the current mode of phone or internet communication (SMS or email) these features are not available, or only partly available (unless you are using 3G). Thus, the written text is taken as the sole reference for meaning making and interpretation. The hypothesis is that the judgments resulting from this method of meaning making and interpretation accumulate into our perception of the person. The worst part is that such a perception could well be wrong and misleading.

To conclude, the spread of digital interaction, with all its positive and negative impacts, is unavoidable. Just as Churchill said, our lives might be being shaped by this new form of interaction. But we can still make the most of it while preserving the values of face-to-face interpersonal interaction. Hence, I think, the issue of “healthy” digital interaction etiquette and skills is important, particularly those pertaining to written messaging (SMS and email). Here are some ideas that come to mind:

  • As SMS and email are the most commonly used modes of digital communication (they are even slowly replacing face-to-face interaction), allocate time to treat them fairly as your new form of genuine interaction. Five minutes of your time could save you a good friend.
  • Structure your text carefully. People identify you by what you write.
  • Do not forget the greetings. You like to be greeted, and so do others.
  • Do not over-contract your language.
  • Be sufficient.



Monday, April 14, 2008

Intellectual Puberty

A successful education is one that humanizes people. Yet well-educated people who behave like aliens are common in our daily life. These people do not want to understand others; they want to be understood. They force on others their world view that the world spins along a linear line of cause and effect. They live in an abstract time zone of submission deadlines, so you will not get even the smallest portion of their time if your subject follows the solar or lunar calendar. They operate on a cosmic level of discourse and rarely have time for ordinary "How are you?" conversations. You have to squeeze your mind for the best grammar and vocabulary to communicate your ideas to them, in order to avoid the real-time critique they will produce if you fail to do so. They seem to live in their own world or, maybe, they have no life.

Make sure you are not one of them. Stay down to earth.

Saturday, April 12, 2008

Pragmatic Principle of Inquiry

If you come to a place to find something but you don't find it, then look for something else.

Friday, April 11, 2008

The Process of Learning....

Most people I meet ask me the same question: "How can I learn music quickly?" I tell them it took me years and I am still learning.

Yes, there is a tendency to focus only on the output of learning, forgetting the process. That's the insanity of instant interests :)

Semantic

Semantics is the study of meaning in communication. The word derives from Greek σημαντικός (semantikos), "significant"[1], from σημαίνω (semaino), "to signify, to indicate" and that from σήμα (sema), "sign, mark, token"[2]. In linguistics it is the study of interpretation of signs as used by agents or communities within particular circumstances and contexts.[3] It has related meanings in several other fields.

Semanticists differ on what constitutes meaning in an expression. For example, in the sentence, "John loves a bagel", the word bagel may refer to the object itself, which is its literal meaning or denotation, but it may also refer to many other figurative associations, such as how it meets John's hunger, etc., which may be its connotation. Traditionally, the formal semantic view restricts semantics to its literal meaning, and relegates all figurative associations to pragmatics, but this distinction is increasingly difficult to defend[4]. The degree to which a theorist subscribes to the literal-figurative distinction decreases as one moves from the formal semantic, semiotic, pragmatic, to the cognitive semantic traditions.

The word semantic in its modern sense is considered to have first appeared in French as sémantique in Michel Bréal's 1897 book, Essai de sémantique. In International Scientific Vocabulary semantics is also called semasiology. The discipline of semantics is distinct from Alfred Korzybski's General Semantics, which is a system for looking at non-immediate, or abstract, meanings.


Linguistics

In linguistics, semantics is the subfield that is devoted to the study of meaning, as inherent at the levels of words, phrases, sentences, and even larger units of discourse (referred to as texts). The basic area of study is the meaning of signs, and the study of relations between different linguistic units: homonymy, synonymy, antonymy, polysemy, paronyms, hypernymy, hyponymy, meronymy, metonymy, holonymy, exocentricity / endocentricity, linguistic compounds. A key concern is how meaning attaches to larger chunks of text, possibly as a result of the composition from smaller units of meaning. Traditionally, semantics has included the study of connotative sense and denotative reference, truth conditions, argument structure, thematic roles, discourse analysis, and the linkage of all of these to syntax.
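
As a concrete, hedged illustration of some of these lexical relations, the WordNet lexical database can be queried through NLTK in Python; this assumes nltk is installed and its 'wordnet' corpus has been downloaded via nltk.download('wordnet').

from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
print(dog.hypernyms())         # more general concepts: hypernymy
print(dog.hyponyms())          # more specific kinds of dog: hyponymy
print(dog.member_holonyms())   # wholes of which a dog is a member: holonymy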

Formal semanticists are concerned with the modeling of meaning in terms of the semantics of logic. Thus the sentence John loves a bagel above can be broken down into its constituents (signs), of which the unit loves may serve as both syntactic and semantic head.

In the late 1960s, Richard Montague proposed a system for defining semantic entries in the lexicon in terms of lambda calculus. Thus, the syntactic parse of the sentence above would now indicate loves as the head, and its entry in the lexicon would point to the arguments as the agent, John, and the object, bagel, with a special role for the article "a" (which Montague called a quantifier). This resulted in the sentence being associated with the logical predicate loves (John, bagel), thus linking semantics to categorial grammar models of syntax. The logical predicate thus obtained would be elaborated further, e.g. using truth theory models, which ultimately relate meanings to a set of Tarskiian universals, which may lie outside the logic. The notion of such meaning atoms or primitives is basic to the language of thought hypothesis from the 70s.
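
The compositional idea can be caricatured in a few lines of Python; this is only a toy sketch in the spirit of the lambda-calculus treatment just described, not Montague's actual system, and the small domain and set of facts are invented.

# A toy model: 'loves' is a curried predicate, 'a' is an existential quantifier
# over a small domain, and the sentence denotes a truth value.
domain = {"John", "bagel_1", "bagel_2"}
loves_facts = {("John", "bagel_1")}

bagel = lambda x: x.startswith("bagel")
loves = lambda obj: (lambda subj: (subj, obj) in loves_facts)
a = lambda noun: (lambda vp: any(noun(x) and vp(x) for x in domain))

# "John loves a bagel": the quantified object combines with the verb phrase.
sentence = a(bagel)(lambda x: loves(x)("John"))
print(sentence)   # True, because some bagel in the domain is loved by John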

Despite its elegance, Montague grammar was limited by the context-dependent variability in word sense, and led to several attempts at incorporating context, such as:

  • situation semantics ('80s): Truth-values are incomplete, they get assigned based on context
  • generative lexicon ('90s): categories (types) are incomplete, and get assigned based on context

The dynamic turn in semantics

In the Chomskian tradition in linguistics there was no mechanism for the learning of semantic relations, and the nativist view considered all semantic notions as inborn. Thus, even novel concepts were proposed to have been dormant in some sense. This traditional view was also unable to address many issues such as metaphor or associative meanings, and semantic change, where meanings within a linguistic community change over time, and qualia or subjective experience. Another issue not addressed by the nativist model was how perceptual cues are combined in thought, e.g. in mental rotation[5].

This traditional view of semantics, as an innate finite meaning inherent in a lexical unit that can be composed to generate meanings for larger chunks of discourse, is now being fiercely debated in the emerging domain of cognitive linguistics[6] and also in the non-Fodorian camp in Philosophy of Language[7]. The challenge is motivated by

  • factors internal to language, such as the problem of resolving indexical or anaphora (e.g. this x, him, last week). In these situations "context" serves as the input, but the interpreted utterance also modifies the context, so it is also the output. Thus, the interpretation is necessarily dynamic and the meaning of sentences is viewed as context-change potentials instead of propositions.
  • factors external to language, i.e. language is not a set of labels stuck on things, but "a toolbox, the importance of whose elements lie in the way they function rather than their attachments to things."[7] This view reflects the position of the later Wittgenstein and his famous game example, and is related to the positions of Quine, Davidson, and others.

A concrete example of the latter phenomenon is semantic underspecification — meanings are not complete without some elements of context. To take an example of a single word, "red", its meaning in a phrase such as red book is similar to many other usages, and can be viewed as compositional[8]. However, the colours implied in phrases such as "red wine" (very dark), and "red hair" (coppery), or "red soil", or "red skin" are very different. Indeed, these colours by themselves would not be called "red" by native speakers. These instances are contrastive, so "red wine" is so called only in comparison with the other kind of wine (which also is not "white" for the same reasons). This view goes back to de Saussure:

Each of a set of synonyms like redouter ('to dread'), craindre ('to fear'), avoir peur ('to be afraid') has its particular value only because they stand in contrast with one another. No word has a value that can be identified independently of what else is in its vicinity.[9]

and may go back to earlier Indian views on language, especially the Nyaya view of words as indicators and not carriers of meaning[10].

An attempt to defend a system based on propositional meaning for semantic underspecification can be found in the Generative Lexicon model of James Pustejovsky, who extends contextual operations (based on type shifting) into the lexicon. Thus meanings are generated on the fly based on finite context.

Prototype theory

Another set of concepts related to fuzziness in semantics is based on prototypes. The work of Eleanor Rosch and George Lakoff in the 1970s led to a view that natural categories are not characterizable in terms of necessary and sufficient conditions, but are graded (fuzzy at their boundaries) and inconsistent as to the status of their constituent members.

Systems of categories are not objectively "out there" in the world but are rooted in people's experience. These categories evolve as learned concepts of the world — meaning is not an objective truth, but a subjective construct, learned from experience, and language arises out of the "grounding of our conceptual systems in shared embodiment and bodily experience"[4]. A corollary of this is that the conceptual categories (i.e. the lexicon) will not be identical for different cultures, or indeed, for every individual in the same culture. This leads to another debate (see the Whorf-Sapir hypothesis or Eskimo words for snow).


Computer science

In computer science, where it is considered as an application of mathematical logic, semantics reflects the meaning of programs or functions.

In this regard, semantics permits programs to be separated into their syntactical part (grammatical structure) and their semantic part (meaning). For instance, the following statements use different syntaxes (languages), but result in the same semantics:

  • x += y; (C, Java, etc.)
  • x := x + y; (Pascal)
  • Let x = x + y;
  • x = x + y (various BASIC languages)

Generally these operations would all perform an arithmetical addition of 'y' to 'x' and store the result in a variable 'x'.

Semantics for computer applications falls into three categories[11]:

  • Operational semantics: The meaning of a construct is specified by the computation it induces when it is executed on a machine. In particular, it is of interest how the effect of a computation is produced.
  • Denotational semantics: Meanings are modelled by mathematical objects that represent the effect of executing the constructs. Thus only the effect is of interest, not how it is obtained.
  • Axiomatic semantics: Specific properties of the effect of executing the constructs are expressed as assertions. Thus there may be aspects of the executions that are ignored.

The Semantic Web refers to the extension of the World Wide Web through the embedding of additional semantic metadata; see, for example, the Web Ontology Language (OWL).

Psychology

In psychology, semantic memory is memory for meaning: in other words, the aspect of memory that preserves only the gist, the general significance, of remembered experience, while episodic memory is memory for the ephemeral details, the individual features, or the unique particulars of experience. Word meanings are measured by the company they keep, that is, by the relationships among words themselves in a semantic network. In a network created by people analyzing their understanding of the word (such as WordNet) the links and decomposition structures of the network are few in number and kind, and include "part of", "kind of", and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines, as well as natural language processing, neural networks and predicate calculus techniques.

Semantics has been reported to drive the course of psychotherapeutic interventions. Language structure can determine the treatment approach to drug-abusing patients.[12] While working in Europe for the US Information Agency, the American psychiatrist Dr. A. James Giannini reported semantic differences in medical approaches to addiction treatment.[13] English-speaking countries used the term "drug dependence" to describe a rather passive pathology in their patients. As a result the physician's role was more active.[14] Southern European countries such as Italy and Yugoslavia utilized the concept of "tossicomania" (i.e. toxic mania) to describe a more active, rather than passive, role of the addict. As a result the treating physician's role shifted to that of a more passive guide than that of an active interventionist.[15]

References

  1. ^ Semantikos, Henry George Liddell, Robert Scott, A Greek-English Lexicon, at Perseus
  2. ^ Semaino, Henry George Liddell, Robert Scott, An Intermediate Greek-English Lexicon, at Perseus
  3. ^ Otto Neurath (Editor), Rudolf Carnap (Editor), Charles F. W. Morris (Editor) (1955). International Encyclopedia of Unified Science. Chicago, IL: University of Chicago Press.
  4. ^ a b George Lakoff and Mark Johnson (1999). Philosophy in the Flesh: The embodied mind and its challenge to Western thought. Chapter 1. New York: Basic Books.
  5. ^ Barsalou, L. (1999). Perceptual Symbol Systems. Behavioral and Brain Sciences 22(4)
  6. ^ Ronald W. Langacker (1999). Grammar and Conceptualization. Berlin/New York: Mouton de Gruyter. ISBN 3110166038.
  7. ^ a b Jaroslav Peregrin (2003). Meaning: The Dynamic Turn. Current Research in the Semantics/Pragmatics Interface. London: Elsevier.
  8. ^ P. Gardenfors (2000). Conceptual Spaces. Cambridge, MA: MIT Press/Bradford Books.
  9. ^ Ferdinand de Saussure (1916). The Course of General Linguistics (Cours de linguistique générale).
  10. ^ Bimal Krishna Matilal (1990). The word and the world: India's contribution to the study of language. Oxford. The Nyaya and Mimamsa schools in Indian vyakarana tradition conducted a centuries-long debate on whether sentence meaning arises through composition on word meanings, which are primary; or whether word meanings are obtained through analysis of sentences where they appear. (Chapter 8).
  11. ^ Nielson, Hanne Riis & Nielson, Flemming (1995). Semantics with Applications: A Formal Introduction (1st ed.). Chichester, England: John Wiley & Sons. ISBN 0-471-92980-8.
  12. ^ AJ Giannini. Mi ritroni in mente. Il Giornale di San Patrignano. 6(32): 27-30, 1990.
  13. ^ AJ Giannini. Bo kahunte ha kpekot (In the claws of crack). Hoba Makeohja. 6(10): 34-35, 1990.
  14. ^ AJ Giannini. An approach to drug abuse, intoxication and withdrawal. American Family Physician. 61(9): 2763-2769, 2000.
  15. ^ AJ Giannini. L'abbuso di coca crack invasione da fermare. Il Giornale di Medico. 7(6): 1-5, 1990.


Semantic is an infrastructure for parser based text analysis in Emacs. It is a lexer, parser-generator, and parser. It is written in Emacs Lisp and is customized to the way Emacs thinks about language files, and is optimized to use Emacs' parsing capabilities.

Semantic's goal is to provide an intermediate API for authors of language-agnostic tools who want to deal with languages in a generic way. It also provides a simple way for mode authors, who are experts in their language, to provide a parser for those tool authors without knowing anything about those tools.

Semantic's Parser Infrastructure:

Lexical Analyzer & Preprocessor
Converts a language into a token stream. Preprocessor support with lexical macro replacement (C/C++).
Parser
Converts a lexical token stream into a table of tags defined by the language.
Parser Generator with Bison
David Ponce has ported Bison to Emacs Lisp. New languages can be supported via the wisent parser.
Language Parsers
Parsers that have already been implemented:
Emacs Lisp, Java, C/C++, C#, Python, Erlang, awk, Makefile, Scheme, HTML, Texinfo, Javascript, dot.
Also: Semantic's own grammar format (.by or .wy)
Database
Persistent storage of parsed information. Speeds load time and provides standard way of cross referencing files. The database backend supports alternate parsers and file formats.
Incremental parser
Reparses minimal parts of a buffer as you edit.
Idle service manager
Reparses buffers in idle time, and also calls other services.

Coding Tools


[Image: call-tree.jpg, a chart of the complexity of semantic.el]
Semantic supports a wide range of user interface tools.
Smart Completion
Completes symbols actually available in a given context. Some tools call this intellisense.
Idle Summary and Completion
Show function help, or offer up smart completions in idle time.
Speedbar Browser
Code for browsing tag lists with Speedbar.
Documentation Generator
Identifies inline documentation in source code, and can convert it to texinfo. It can also create inline documentation.
Stickyfunc mode
Locks the function declaration you are editing in the header line. (Emacs only).
Imenu
Create hierarchical imenu menus from parsed files.
Navigator
Senator is a navigator that permits simple navigation through the parsed language. It also includes token highlighting, and magic cut & paste.
Decoration Modes
Decorate buffers using more than regular expressions, such as overlines for functions or highlighting header files Emacs can't find.
Smart Bookmarking
Emacs tracks what you edit, and allows quick navigation by name.
Highlight bad code
Text that does not fit the language is underlined.
Charting
Draw a chart of some things semantic can quantify.
Analyzer
Examines code, and references those names against a database of pre-parsed files to provide detailed information.

Contribute to Semantic

Semantic's potential scope is quite large. If you would like to help Semantic push forward, consider participating in one of these fields:

  • Write a language agnostic tool that uses semantic.
  • Write a language definition for an unsupported language.
  • Maintain one of the language definitions already in Semantic, like scheme or C++.
  • Test Semantic and send in bugs.
  • Help with documentation
  • Keep Semantic compatible with the many versions of [X]Emacs

Join the mailing list

Join the mailing list to ask questions or to help develop Semantic.

Other Tools that use Semantic

Java Development Environment (JDE) is a full development environment for Java. It uses Semantic to provide useful Java specific features.

Emacs Code Browser (ECB) lets you browse your files' contents. Uses the Semantic package.

COGRE (pronounced cougar) is a COnnected GRaph Editor for Emacs. As it matures, it will use semantic to reverse-engineer sources into UML diagrams.

SRecode The Semantic Recoder is a template management system that generates code from Semantic tags.

Notes

Semantic works with Emacs 21, 22 & 23, XEmacs 20.x, and 21.x.

Semantic is always developed with Emacs from CVS, but I strive for compatibility with as many versions of Emacs as I can. Please report bugs if it doesn't support your version of Emacs.

Downloading CEDET

CEDET tools including EIEIO, Semantic, Speedbar, EDE, and COGRE are now distributed together in a single file. This simplifies installation and version management.

While some individual CEDET packages have active stable releases, the CEDET bundle is currently available only as a beta or pre-release. CEDET betas are the only place to get the latest versions of the individual tools.

PRE RELEASES

CEDET is currently driving toward a 1.0 release, and small changes are going into each pre-release. Try out a pre-release and send in bug reports on the build process, or anything else.

Try out cedet-1.0pre4.tar.gz.

Try out cedet-1.0pre3.tar.gz.

If Semantic kills font-lock, you need to get and use overlay-fix.el. Thanks to David Ponce for creating this fix.


3.5 Semantics

Stephen G. Pulman
SRI International, Cambridge, UK
and University of Cambridge Computer Laboratory, Cambridge, UK

3.5.1 Basic Notions of Semantics

A perennial problem in semantics is the delineation of its subject matter. The term meaning can be used in a variety of ways, and only some of these correspond to the usual understanding of the scope of linguistic or computational semantics. We shall take the scope of semantics to be restricted to the literal interpretations of sentences in a context, ignoring phenomena like irony, metaphor, or conversational implicature [Gri75,Lev83].

A standard assumption in computationally oriented semantics is that knowledge of the meaning of a sentence can be equated with knowledge of its truth conditions: that is, knowledge of what the world would be like if the sentence were true. This is not the same as knowing whether a sentence is true, which is (usually) an empirical matter, but knowledge of truth conditions is a prerequisite for such verification to be possible. Meaning as truth conditions needs to be generalized somewhat for the case of imperatives or questions, but is a common ground among all contemporary theories, in one form or another, and has an extensive philosophical justification, e.g., [Dav69,Dav73].

A semantic description of a language is some finitely stated mechanism that allows us to say, for each sentence of the language, what its truth conditions are. Just as for grammatical description, a semantic theory will characterize complex and novel sentences on the basis of their constituents: their meanings, and the manner in which they are put together. The basic constituents will ultimately be the meanings of words and morphemes. The modes of combination of constituents are largely determined by the syntactic structure of the language. In general, to each syntactic rule combining some sequence of child constituents into a parent constituent, there will correspond some semantic operation combining the meanings of the children to produce the meaning of the parent.

A corollary of knowledge of the truth conditions of a sentence is knowledge of what inferences can be legitimately drawn from it. Valid inference is traditionally within the province of logic (as is truth) and mathematical logic has provided the basic tools for the development of semantic theories. One particular logical system, first order predicate calculus (FOPC), has played a special role in semantics (as it has in many areas of computer science and artificial intelligence). FOPC can be seen as a small model of how to develop a rigorous semantic treatment for a language, in this case an artificial one developed for the unambiguous expression of some aspects of mathematics. The set of sentences or well formed formulae of FOPC are specified by a grammar, and a rule of semantic interpretation is associated with each syntactic construct permitted by this grammar. The interpretations of constituents are given by associating them with set-theoretic constructions (their denotation) from a set of basic elements in some universe of discourse. Thus for any of the infinitely large set of FOPC sentences we can give a precise description of its truth conditions, with respect to that universe of discourse. Furthermore, we can give a precise account of the set of valid inferences to be drawn from some sentence or set of sentences, given these truth conditions, or (equivalently, in the case of FOPC) given a set of rules of inference for the logic.

3.5.2 Practical Applications of Semantics

Some natural language processing tasks (e.g., message routing, textual information retrieval, translation) can be carried out quite well using statistical or pattern matching techniques that do not involve semantics in the sense assumed above. However, performance on some of these tasks improves if semantic processing is involved. (Not enough progress has been made to see whether this is true for all of the tasks).

Some tasks, however, cannot be carried out at all without semantic processing of some form. One important example application is that of database query, of the type chosen for the Air Travel Information Service (ATIS) task [DAR89]. For example, if a user asks, "Does every flight from London to San Francisco stop over in Reykjavik?" then the system needs to be able to deal with some simple semantic facts. Relational databases do not store propositions of the form every X has property P, and so a logical inference from the meaning of the sentence is required. In this case, every X has property P is equivalent to there is no X that does not have property P, and a system that knows this will also therefore know that the answer to the question is no if a non-stopping flight is found and yes otherwise.
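
The equivalence used in that answer can be checked mechanically; the following Python sketch uses a made-up two-row flight table rather than the real ATIS data.

flights = [
    {"from": "London", "to": "San Francisco", "stopover": "Reykjavik"},
    {"from": "London", "to": "San Francisco", "stopover": None},
]

relevant = [f for f in flights if f["from"] == "London" and f["to"] == "San Francisco"]

# "Every flight stops over in Reykjavik" ...
every = all(f["stopover"] == "Reykjavik" for f in relevant)
# ... is equivalent to "there is no flight that does not stop over in Reykjavik".
no_counterexample = not any(f["stopover"] != "Reykjavik" for f in relevant)

assert every == no_counterexample
print("yes" if every else "no")   # "no" here, since a non-stopping flight exists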

Any kind of generation of natural language output (e.g., summaries of financial data, traces of KBS system operations) usually requires semantic processing. Generation requires the construction of an appropriate meaning representation, and then the production of a sentence or sequence of sentences which express the same content in a way that is natural for a reader to comprehend, e.g., [MKS94]. To illustrate, if a database lists a 10 a.m. flight from London to Warsaw on the 1st-14th and 16th-30th of November, then it is more helpful to answer the question "What days does that flight go?" with "Every day except the 15th" instead of a list of 30 days of the month. But to do this the system needs to know that the semantic representations of the two propositions are equivalent.
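
The summarisation step in that example amounts to computing the complement of the flight's schedule over the month. A minimal Python sketch, with the schedule hard-coded to the dates mentioned above:

# The flight runs on the 1st-14th and 16th-30th of a 30-day month.
flight_days = set(range(1, 15)) | set(range(16, 31))
month_days = set(range(1, 31))

missing = sorted(month_days - flight_days)
if not missing:
    answer = "Every day"
else:
    answer = "Every day except the " + ", ".join(f"{d}th" for d in missing)
print(answer)   # "Every day except the 15th"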

3.5.3 Development of Semantic Theory

It is instructive, though not historically accurate, to see the development of contemporary semantic theories as motivated by the deficiencies that are uncovered when one tries to take the FOPC example further as a model for how to do natural language semantics. For example, the technique of associating set theoretic denotations directly with syntactic units is clear and straightforward for the artificial FOPC example. But when a similar programme is attempted for a natural language like English, whose syntax is vastly more complicated, the statement of the interpretation clauses becomes in practice extremely baroque and unwieldy, especially so when sentences that are semantically but not syntactically ambiguous are considered [Coo83]. For this reason, in most semantic theories, and in all computer implementations, the interpretation of sentences is given indirectly. A syntactically disambiguated sentence is first translated into an expression of some artificial logical language, where this expression in its turn is given an interpretation by rules analogous to the interpretation rules of FOPC. This process factors out the two sources of complexity whose product makes direct interpretation cumbersome: reducing syntactic variation to a set of common semantic constructs; and building the appropriate set-theoretical objects to serve as interpretations.

The first large scale semantic description of this type was developed by [Mon73]. Montague made a further departure from the model provided by FOPC in using a more powerful logic (intensional logic) as an intermediate representation language. All later approaches to semantics follow Montague in using more powerful logical languages: while FOPC captures an important range of inferences (involving, among others, words like every, and some as in the example above), the range of valid inference patterns in natural languages is far wider. Some of the constructs that motivate the use of richer logics are sentences involving concepts like necessity or possibility and propositional attitude verbs like believe or know, as well as the inference patterns associated with other English quantifying expressions like most or more than half, which cannot be fully captured within FOPC [BC81].

For Montague, and others working in frameworks descended from that tradition (among others, Partee, e.g., [Par86], Krifka, e.g., [Kri89], and Groenendijk and Stokhof, e.g., [GS84,GS91a]) the intermediate logical language was merely a matter of convenience which could in principle always be dispensed with provided the principle of compositionality was observed. (I.e., The meaning of a sentence is a function of the meanings of its constituents, attributed to Frege, [Fre92]). For other approaches, (e.g., Discourse Representation Theory, [Kam81]) an intermediate level of representation is a necessary component of the theory, justified on psychological grounds, or in terms of the necessity for explicit reference to representations in order to capture the meanings of, for example, pronouns or other referentially dependent items, elliptical sentences or sentences ascribing mental states (beliefs, hopes, intentions). In the case of computational implementations, of course, the issue of the dispensability of representations does not arise: for practical purposes, some kind of meaning representation is a sine qua non for any kind of computing.

3.5.4 Discourse Representation Theory

Discourse Representation Theory (DRT) [Kam81,KR93], as the name implies, has taken the notion of an intermediate representation as an indispensable theoretical construct, and, as also implied, sees the main unit of description as being a discourse rather than sentences in isolation. One of the things that makes a sequence of sentences constitute a discourse is their connectivity with each other, as expressed through the use of pronouns and ellipsis or similar devices. This connectivity is mediated through the intermediate representation, however, and cannot be expressed without it. The kind of example that is typically used to illustrate this is the following:

A computer developed a fault.

A simplified first order representation of the meaning of this sentence might be:

exists(X,computer(X) and develop_a_fault(X))

There is a computer X and X developed a fault. This is logically equivalent to:

not(forall(X,not(computer(X) and develop_a_fault(X))))

It isn't the case that every computer didn't develop a fault. However, whereas the first sentence can be continued thus:

A computer developed a fault.
It was quickly repaired.

its logically equivalent counterpart cannot be continued in the same way:

It isn't the case that every computer didn't develop a fault.
It was quickly repaired.

Thus the form of the representation has linguistic consequences. DRT has developed an extensive formal description of a variety of phenomena such as this, while also paying careful attention to the logical and computational interpretation of the intermediate representations proposed. [KR93] contains detailed analyses of aspects of noun phrase reference, propositional attitudes, tense and aspect, and many other phenomena.
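
To give a flavour of how this works (a simplified sketch in an informal linear notation for discourse representation structures, not the notation of [KR93] itself), processing the two-sentence discourse above builds a single structure along the lines of:

[x, y | computer(x), develop_a_fault(x), y = x, quickly_repaired(y)]

The indefinite a computer introduces the discourse referent x, and the pronoun it introduces y, which is resolved by adding the condition y = x. The doubly negated paraphrase, by contrast, introduces no discourse referent that is accessible outside the negation, which is why its continuation with a pronoun fails.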

3.5.5 Dynamic Semantics

Dynamic semantics (e.g., [GS91a,GS91b]) takes the view that the standard truth-conditional view of sentence meaning deriving from the paradigm of FOPC does not do sufficient justice to the fact that uttering a sentence changes the context it was uttered in. Deriving inspiration in part from work on the semantics of programming languages, dynamic semantic theories have developed several variations on the idea that the meaning of a sentence is to be equated with the changes it makes to a context.

Update semantics (e.g., [Vel85,vEdV92]) approaches have been developed to model the effect of asserting a sequence of sentences in a particular context. In general, the order of such a sequence has its own significance. A sequence like:

Someone's at the door. Perhaps it's John. It's Mary!

is coherent, but not all permutations of it would be:

Someone's at the door. It's Mary. Perhaps it's John.

Recent strands of this work make connections with the artificial intelligence literature on truth maintenance and belief revision (e.g [G90]).

Dynamic predicate logic [GS91a,GS90] extends the interpretation clauses for FOPC (or richer logics) by allowing assignments of denotations to subexpressions to carry over from one sentence to its successors in a sequence. This means that dependencies that are difficult to capture in FOPC or other non-dynamic logics, such as that between someone and it in:

Someone's at the door. It's Mary.

can be correctly modeled, without sacrificing any of the other advantages that traditional logics offer.
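
A minimal illustration, in the same informal notation as the earlier examples (this states the characteristic equivalence of dynamic predicate logic, and is not a quotation from the cited works): dynamically,

exists(X, person(X) and at_the_door(X)) and is_mary(X)

is interpreted just like

exists(X, person(X) and at_the_door(X) and is_mary(X))

because the assignment to X made while interpreting the first conjunct carries over to the second; in static FOPC the final occurrence of X in the first formula would simply be unbound.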

3.5.6 Situation Semantics and Property Theory

One of the assumptions of most semantic theories descended from Montague is that information is total, in the sense that in every situation, a proposition is either true or it is not. This enables propositions to be identified with the set of situations (or possible worlds) in which they are true. This has many technical conveniences, but is descriptively incorrect, for it means that any proposition conjoined with a tautology (a logical truth) will remain the same proposition according to the technical definition. But this is clearly wrong: all cats are cats is a tautology, but The computer crashed, and The computer crashed and all cats are cats are clearly different propositions (reporting the first is not the same as reporting the second, for example).

Situation theory [BP83] has attempted to rework the whole logical foundation underlying the more traditional semantic theories in order to arrive at a satisfactory formulation of the notion of a partial state of the world or situation, and in turn, a more satisfactory notion of proposition. This reformulation has also attempted to generalize the logical underpinnings away from previously accepted restrictions (for example, restrictions prohibiting sets containing themselves, and other apparently paradoxical notions) in order to be able to explore the ability of language to refer to itself in ways that have previously resisted a coherent formal description [BE87].

Property theory [Tur88,Tur92] has also been concerned to rework the logical foundations presupposed by semantic theory, motivated by similar phenomena.

In general, it is fair to say that, with a few exceptions, the contribution of dynamic semantics, situation theory, and property theory has so far been less in the analysis of new semantic phenomena than in the exploration of more cognitively and computationally plausible ways of expressing insights originating within Montague-derived approaches. However, these new frameworks are now making it possible to address data that resisted any formal account by more traditional theories.

3.5.7 Implementations

Whereas there are beginning to be quite a number of systems displaying wide syntactic coverage, there are very few that are able to provide corresponding semantic coverage. Almost all current large scale implementations of systems with a semantic component are inspired to a greater or lesser extent by the work of Montague (e.g., [BBIS94,ASF95,Als92]). This reflects the fact that the majority of descriptive work by linguists is expressed within some form of this framework, and also the fact that its computational properties are better understood.

However, Montague's own work gave only a cursory treatment of a few context-dependent phenomena like pronouns, and none at all of phenomena like ellipsis. In real applications, such constructs are very common and all contemporary systems supplement the representations made available by the base logic with constructs for representing the meaning of these context-dependent constructions. It is computationally important to be able to carry out at least some types of processing directly with these underspecified representations: i.e., representations in which the contextual contribution to meaning has not yet been made explicit, in order to avoid a combinatorial explosion of potential ambiguities. One striking motivation for underspecification is the case of quantifying noun phrases, for these can give rise to a high degree of ambiguity if treated in Montague's fashion. For example, every keyboard is connected to a computer is interpretable as involving either a single computer or a possibly different one for each keyboard, in the absence of a context to determine which is the plausible reading: sentences do not need to be much more complex for a large number of possibilities to arise.
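
As a schematic illustration (this is not the notation of any particular system, and the predicate names are invented for the example), an underspecified representation can leave the two quantifiers unscoped:

connected(<every, K, keyboard(K)>, <a, C, computer(C)>)

and commit, only once context has been consulted, to one of the two fully scoped readings:

forall(K, keyboard(K) implies exists(C, computer(C) and connected(K,C)))
exists(C, computer(C) and forall(K, keyboard(K) implies connected(K,C)))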

One of the most highly developed of the implemented approaches addressing these issues is the quasi-logical form developed in the Core Language Engine (CLE) [Als90,Als92] a representation which allows for meanings to be of varying degrees of independence of a context. This makes it possible for the same representation to be used in applications like translation, which can often be carried out without reference to context, as well as in database query, where the context-dependent elements must be resolved in order to know exactly which query to submit to the database. The ability to operate with underspecified representations of this type is essential for computational tractability, since the task of spelling out all of the possible alternative fully specified interpretations for a sentence and then selecting between them would be computationally intensive even if it were always possible in practice.

3.5.8 Future Directions

Currently, the most pressing needs for semantic theory are to find ways of achieving wider and more robust coverage of real data. This will involve progress in several directions: (i) Further exploration of the use of underspecified representations so that some level of semantic processing can be achieved even where complete meaning representations cannot be constructed (either because of lack of coverage or inability to carry out contextual resolution). (ii) Closer cooperation with work in lexicon construction. The tradition in semantics has been to assume that word meanings can by and large simply be plugged in to semantic structures. This is a convenient and largely correct assumption when dealing with structures like every X is P, but becomes less tenable as more complex phenomena are examined. However, the relevant semantic properties of individual words or groups of words are seldom to be found in conventional dictionaries and closer cooperation between semanticists and computationally aware lexicographers is required. (iii) More integration between sentence or utterance level semantics and theories of text or dialogue structure. Recent work in semantics has shifted emphasis away from the purely sentence-based approach, but the extent to which the interpretations of individual sentences can depend on dialogue or text settings, or on the goals of speakers, is much greater than had been suspected.

The Semantic Web: An Introduction

This document is designed to be a simple but comprehensive introductory publication for anybody trying to get into the Semantic Web: from beginners through to long-time hackers. Recommended pre-reading: the Semantic Web in Breadth.

Table Of Contents

  1. What Is The Semantic Web?
  2. Simple Data Modelling: Schemata
  3. Ontologies, Inferences, and DAML
  4. The Power Of Semantic Web Languages
  5. Trust and Proof
  6. Ambient Information and SEM
  7. Evolution
  8. Does It Work? What Semantic Web Applications Are There?
  9. What Now? Further Reading

What Is The Semantic Web?

The Semantic Web is a mesh of information linked up in such a way as to be easily processable by machines, on a global scale. You can think of it as being an efficient way of representing data on the World Wide Web, or as a globally linked database.

The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed. However, Semantic Web technologies are still very much in their infancy, and although the future of the project in general appears to be bright, there seems to be little consensus about the likely direction and characteristics of the early Semantic Web.

What's the rationale for such a system? Data that is generally hidden away in HTML files is often useful in some contexts, but not in others. The problem with the majority of data on the Web that is in this form at the moment is that it is difficult to use on a large scale, because there is no global system for publishing data in such a way that it can be easily processed by anyone. For example, just think of information about local sports events, weather information, plane times, Major League Baseball statistics, and television guides... all of this information is presented by numerous sites, but all in HTML. The problem is that, in some contexts, it is difficult to use this data in the ways that one might want to.

So the Semantic Web can be seen as a huge engineering solution... but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, so more people will want to publish data, and there will be a knock-on or domino effect. We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning... onto how this will be accomplished.

The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triple-based structures: i.e., many triples of URI data that can be held in databases, or interchanged on the World Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called "Resource Description Framework" syntaxes.

URI - Uniform Resource Identifier

A URI is simply a Web identifier: like the strings starting with "http:" or "ftp:" that you often find on the World Wide Web. Anyone can create a URI, and the ownership of them is clearly delegated, so they form an ideal base technology on top of which to build a global Web. In fact, the World Wide Web is such a thing: anything that has a URI is considered to be "on the Web".

The syntax of URIs is carefully governed by the IETF, who published RFC 2396 as the general URI specification. The W3C maintains a list of URI schemes.

RDF - Resource Description Framework

A triple can simply be described as three URIs. A language which utilises three URIs in such a way is called RDF: the W3C have developed an XML serialization of RDF, the "Syntax" in the RDF Model and Syntax recommendation. RDF XML is considered to be the standard interchange format for RDF on the Semantic Web, although it is not the only format. For example, Notation3 (which we shall be going through later on in this article) is an excellent plain text alternative serialization.

Once information is in RDF form, it becomes easy to process it, since RDF is a generic format, which already has many parsers. XML RDF is quite a verbose specification, and it can take some getting used to (for example, to learn XML RDF properly, you need to understand a little about XML and namespaces beforehand...), but let's take a quick look at an example of XML RDF right now:-

<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:foaf="http://xmlns.com/0.1/foaf/" >
  <rdf:Description rdf:about="">
    <dc:creator rdf:parseType="Resource">
      <foaf:name>Sean B. Palmer</foaf:name>
    </dc:creator>
    <dc:title>The Semantic Web: An Introduction</dc:title>
  </rdf:Description>
</rdf:RDF>
This piece of RDF basically says that this article has the title "The Semantic Web: An Introduction", and was written by someone whose name is "Sean B. Palmer". Here are the triples that this RDF produces:-

<> <http://purl.org/dc/elements/1.1/creator> _:x0 .
<> <http://purl.org/dc/elements/1.1/title> "The Semantic Web: An Introduction" .
_:x0 <http://xmlns.com/0.1/foaf/name> "Sean B. Palmer" .

This format is actually a plain text serialization of RDF called "Notation3", which we shall be covering later on. Note that some people actually prefer using XML RDF to Notation3, but it is generally accepted that Notation3 is easier to use, and is of course convertible to XML RDF anyway.

Why RDF?

When people are confronted with XML RDF for the first time, they usually have two questions: "why use RDF rather than XML?", and "do we use XML Schema in conjunction with RDF?".

The answer to "why use RDF rather than XML?" is quite simple, and is twofold. Firstly, the benefit that one gets from drafting a language in RDF is that the information maps directly and unambiguously to a model, a model which is decentralized, and for which there are many generic parsers already available. This means that when you have an RDF application, you know which bits of data are the semantics of the application, and which bits are just syntactic fluff. And not only do you know that, everyone knows that, often implicitly without even reading a specification because RDF is so well known. The second part of the twofold answer is that we hope that RDF data will become a part of the Semantic Web, so the benefits of drafting your data in RDF now draws parallels with drafting your information in HTML in the early days of the Web.

The answer to "do we use XML Schema in conjunction with RDF?" is almost as brief. XML Schema is a language for restricting the syntax of XML applications. RDF already has a built in BNF that sets out how the language is to be used, so on the face of it the answer is a solid "no". However, using XML Schema in conjunction with RDF may be useful for creating datatypes and so on. Therefore the answer is "possibly", with a caveat that it is not really used to control the syntax of RDF. This is a common misunderstanding, perpetuated for too long now.

Screen Scraping, and Forms

For the Semantic Web to reach its full potential, many people need to start publishing data as RDF. Where is this information going to come from? A lot of it can be derived from many data publications that exist today, using a process called "screen scraping". Screen scraping is the act of literally getting the data from a source into a more manageable form (i.e. RDF) using whatever means come to hand. Two useful tools for screen scraping are XSLT (an XML transformations language), and RegExps (in Perl, Python, and so on).

However, screen scraping is often a tedious solution, so another way to approach it is to build proper RDF systems that take input from the user and then store it straight away in RDF. Data such as you may enter when signing up for a new mail account, buying some CDs online, or searching for a used car can all be stored as RDF and then used on the Semantic Web.
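
To give a rough idea of what such captured form data might look like, here is a small sketch using the Notation3 syntax introduced in the next section; the :order1, :buyer, :item and :quantity terms are invented for the example:-

:order1 :buyer :Bob .
:order1 :item "Some Jazz CD" .
:order1 :quantity "1" .

Data of this sort, stored directly as triples rather than buried in HTML, needs no scraping at all.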

Notation3: RDF Made Easy

As you will have seen above, XML RDF can be rather difficult, but thankfully, there are simpler teaching forms of RDF. One of these is called "Notation3", and was developed by Tim Berners-Lee. There is some documentation covering N3, including a specification, and an excellent Primer.

The design criteria behind Notation3 were fairly simple: design a simple, easy-to-learn, scribblable RDF format that is easy to parse and to build larger applications on top of. In Notation3, we can simply write out the URIs in a triple, delimiting them with "<" and ">" symbols. For example, here's a simple triple consisting of three URIs:-

<http://xyz.org/#a> <http://xyz.org/#b> <http://xyz.org/#c> .

To use literal values, simply enclose the value in double quote marks, thus:-

  "Sean" .

If you don't want to give a URI for something that you are talking about, then there is a concept for that too (this is like saying "there is someone called...", but without giving them a URI). You simply use an underscore and a colon, and then put a little label there:-

_:a1 <http://xyz.org/#name> "Sean" .

This may be read as "there is something that has the name Sean", or "a1 has the name Sean, for some value of a1". These things are called anonymous nodes, because they don't have a URI, and are sometimes referred to as existentially quantified nodes.

Note how in one of the examples above, we used the URI "http://xyz.org/#" three times, with only the last character changing each time? Notation3 gives us an excellent way to abbreviate this: by giving parts of URIs aliases, and using those aliases instead. This is how you declare an alias in Notation3:-

@prefix xyz: <http://xyz.org/#> .

Note that you must always declare an alias before you can use it. To use an alias, simply use the "xyz:" bit instead of the URI, and don't wrap the resulting term in the "<" and ">" delimiters. For example, instead of writing:-

<http://xyz.org/#a> <http://xyz.org/#b> <http://xyz.org/#c> .

We can instead do:-

@prefix xyz: <http://xyz.org/#> .
xyz:a xyz:b xyz:c .

Note that it doesn't matter what alias you use for a URI, as long as you use the same one throughout that document. You can also declare many aliases. The following bits of code are both equivalent to the piece of code above:-

@prefix blargh: <http://xyz.org/#> .
blargh:a blargh:b blargh:c .

@prefix blargh: <http://xyz.org/#> .
@prefix xyz: <http://xyz.org/#> .
blargh:a xyz:b blargh:c .

However, it should be noted that we often use a few aliases pretty much as standard, so that when Semantic Web developers exchange code in plain text, they can just leave the prefixes out and people can guess what they're talking about. Note that code should not implement this feature: in real documents the prefixes must still be declared. Here is an example of some "standard" aliases:-

@prefix : <#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix daml: <http://www.daml.org/2001/03/daml+oil#> .
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/0.1/foaf/> .

The empty alias ":" is often used to denote a new namespace that the author has not yet created a URI for (tut, tut). We use it in this introduction.

Notation3 does have many other little constructs including contexts, DAML lists, and alternative ways of representing anonymous nodes, but we need not concern ourselves with them here. Note that a syntax was devised to be an even simpler subset of Notation3, called N-Triples, but it doesn't use prefixes, and hence many of the examples in this article are not valid N-Triples, but are valid Notation3.

Dan Connolly once called Notation3 a "poor-man's RDF authoring tool" (source: RDF IG logs, 2001-06-01 03:55:12). Apparently, it is called Notation3 because "RDF M&S was the first, the RDF strawman was the second and this is the third" (source: RDF IG F2F 2001-02-26).

CWM: An XML RDF And Notation3 Inference Engine

Although we won't be discussing inference engines until later on in this article, we should note at this point that much RDF and Semantic Web processing (albeit often only experimental or demonstrative, at this stage) is done using a Python program called CWM or "Closed World Machine". More information can be found on the SWAP site.

At the moment, the best demonstration of its use is how it can convert from XML RDF into Notation3 and vice versa. To convert "a.n3" into "a.rdf", simply use the following command line:-

python cwm.py a.n3 -rdf > a.rdf

CWM is a very powerful Semantic Web toolkit, and we shall be referring to it occasionally throughout this article.

Simple Data Modelling: Schemata

The first "layer" of the Semantic Web above the syntax discussed above is the simple datatyping model. A "schema" (plural "schemata") is simply a document or piece of code that controls a set of terms in another document or piece of code. It's like a master checklist, or definition grammar.

RDF Schema

RDF Schema (also: RDF Schema Candidate Recommendation) was designed to be a simple datatyping model for RDF. Using RDF Schema, we can say that "Fido" is a type of "Dog", and that "Dog" is a sub class of animal. We can also create properties and classes, as well as doing some slightly more "advanced" stuff such as creating ranges and domains for properties.

All of the terms for RDF Schema start with "http://www.w3.org/2000/01/rdf-schema#", which you may have noticed is in our list of "standard" aliases above. The alias "rdfs:" is often used for RDF Schema, and we continue that tradition here.

The three most important concepts that RDF and RDF Schema give us are the "Resource" (rdfs:Resource), the "Class" (rdfs:Class), and the "Property" (rdf:Property). These are all "classes", in that terms may belong to these classes. For example, all terms in RDF are types of resource. To declare that something is a "type" of something else, we just use the rdf:type property:-

rdfs:Resource rdf:type rdfs:Class .
rdfs:Class rdf:type rdfs:Class .
rdf:Property rdf:type rdfs:Class .
rdf:type rdf:type rdf:Property .

This simply says that "Resource is a type of Class, Class is a type of Class, Property is a type of Class, and type is a type of Property". These are all true statements.

It is quite easy to make up your own classes. For example, let's create a class called "Dog", which contains all of the dogs in the world:-

:Dog rdf:type rdfs:Class .

Now we can say that "Fido is a type of Dog":-

:Fido rdf:type :Dog .

We can also create properties quite easily by saying that a term is a type of rdf:Property, and then use those properties in our RDF:-

:name rdf:type rdf:Property .
:Fido :name "Fido" .

Why have we said that Fido's name is "Fido"? Because the term ":Fido" is a URI, and we could quite easily have chosen any URI for Fido, including ":Squiggle" or ":n508s0srh". We just happened to use the URI ":Fido" because it's easier to remember. However, we still have to tell machines that his name is Fido, because although people can guess that from the URI (even though they probably shouldn't), machines can't.

RDF Schema also has a few more properties that we can make use of: rdfs:subClassOf and rdfs:subPropertyOf. These allow us to say that one class or property is a sub class or sub property of another. For example, we might want to say that the class "Dog" is a sub class of the class "Animal". To do that, we simply say:-

:Dog rdfs:subClassOf :Animal .

Hence, when we say that Fido is a Dog, we are also saying that Fido is an Animal. We can also say that there are other sub classes of Animal:-

:Human rdfs:subClassOf :Animal .
:Duck rdfs:subClassOf :Animal .

And then create new instances of those classes:-

:Bob rdf:type :Human .
:Quacky rdf:type :Duck .

And then we can invent another property, use that, and build up even more information...

:owns rdf:type rdf:Property .
:Bob :owns :Fido .
:Bob :owns :Quacky .
:Bob :name "Bob Fleming" .
:Quacky :name "Quacky" .

And so on. You can see that RDF Schema is very simple, and yet allows one to build up knowledge bases of data in RDF very very quickly.

The next concepts which RDF Schema provides us, which are important to mention, are ranges and domains. Ranges and domains let us say what classes the subject and object of each property must belong to. For example, we might want to say that the property ":bookTitle" must always apply to a book, and have a literal value:-

:Book rdf:type rdfs:Class .
:bookTitle rdf:type rdf:Property .
:bookTitle rdfs:domain :Book .
:bookTitle rdfs:range rdfs:Literal .
:MyBook rdf:type :Book .
:MyBook :bookTitle "My Book" .

rdfs:domain always says what class the subject of a triple using that property belongs to, and rdfs:range always says what class the object of a triple using that property belongs to.

RDF Schema also contains a set of properties for annotating schemata, providing comments, labels, and the like. The two properties for doing this are rdfs:label and rdfs:comment, and an example of their use is:-

:bookTitle rdfs:label "bookTitle";
rdfs:comment "the title of a book" .

It is good practice to always label and comment your new properties, classes, and other terms.

Ontologies, Inferences, and DAML

DAML is a language created by DARPA as an ontology and inference language based upon RDF. DAML takes RDF Schema a step further, by giving us more in depth properties and classes. DAML allows one to be even more expressive than with RDF Schema, and brings us back on track with our Semantic Web discussion by providing some simple terms for creating inferences.

DAML+OIL

DAML provides us with ways of saying things such as inverses, unambiguous properties, unique properties, lists, restrictions, cardinalities, pairwise disjoint lists, datatypes, and so on. We shall run through a couple of these here, but armed with the knowledge that you've already gained from this introduction (assuming that you haven't skipped any of it!), it should be just as beneficial to go through the DAML+OIL Walkthru.

One DAML construct that we shall run through is the daml:inverseOf property. Using this property, we can say that one property is the inverse of another. The rdfs:range and rdfs:domain of daml:inverseOf are both rdf:Property. Here is an example of daml:inverseOf being used:-

:hasName daml:inverseOf :isNameOf .
:Sean :hasName "Sean" .
"Sean" :isNameOf :Sean .

The second useful DAML construct that we shall go through is the daml:UnambiguousProperty class. Saying that a Property is a daml:UnambiguousProperty means that if the object of the property is the same, then the subjects are equivalent. For example:-

foaf:mbox rdf:type daml:UnambiguousProperty .
:x foaf:mbox <mailto:person@example.org> .
:y foaf:mbox <mailto:person@example.org> .

implies that:-

:x daml:equivalentTo :y .

Don't worry if this is getting all a bit too much... it's not essential to learning about the Semantic Web, but it is useful, since many Semantic Web applications now involve DAML. However, DAML is only one in a series of languages and so forth that are being used.

Inference

The principle of "inference" is quite a simple one: being able to derive new data from data that you already know. In a mathematical sense, querying is a form of inference (being able to infer some search results from a mass of data, for example). Inference is one of the driving principles of the Semantic Web, because it will allow us to create SW applications quite easily.

To demonstrate the power of inference, we can use some simple examples. Let's take the simple car example: we can say that:-

:MyCar de:macht "160KW" .

Now, a German Semantic Web processor may well have the term ":macht" built into it, but although an English processor may have an equivalent term built in somewhere, it will not understand code containing a term that it doesn't know. Here, then, is a piece of inference data that makes things clearer to the processor:-

de:macht daml:equivalentTo en:power .

We have used the DAML "equivalentTo" property to say that "macht" in the German system is equivalent to "power" in the English system. Now, using an inference engine, a Semantic Web client could successfully determine that:-

:MyCar en:power "160KW" .

This is only a very simple example of inference, but you can see immediately how easily the system could scale up. Merging databases simply becomes a matter of recording in RDF somewhere that "Person Name" in your database is equivalent to "Name" in my database, and then throwing all of the information together and getting a processor to think about it.
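
A hedged sketch of what such a mapping might look like in Notation3, reusing the daml:equivalentTo device from above (the yourdb: and mydb: prefixes and field names are invented for the example):-

@prefix yourdb: <http://your.example.org/schema#> .
@prefix mydb: <http://my.example.org/schema#> .

yourdb:personName daml:equivalentTo mydb:name .
:record7 yourdb:personName "Alice Smith" .

Given just that one equivalence statement, an inference engine can conclude:-

:record7 mydb:name "Alice Smith" .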

Indeed, this is already possible with Semantic Web tools that we have at our disposal today, such as CWM. Unfortunately, greater levels of inference can only be provided using "First Order Predicate Logic" languages, and DAML is not entirely a FOPL language.

Logic

For the Semantic Web to become expressive enough to help us in a wide range of situations, it will become necessary to construct a powerful logical language for making inferences. There is a raging debate as to how and even whether this can be accomplished, with people pointing out that RDF lacks the power to quantify, and that the scope of quantification is not well defined. Predicate logic is better discussed in John Sowa's excellent Mathematical Background (Predicate Logic).

In particular, Pat Hayes is hard at work on a model for RDF that may ease the situation (2001-09), but there is still a great amount of uncertainty at this time. Of course, this does not stop us from using a Webized version of KIF or somesuch as a logical language on the Semantic Web.

At any rate, we already have a great range of tools with which to build the Semantic Web: assertions (i.e. "and"), and quoting (reification) in RDF, classes, properties, ranges and documentation in RDF Schema, disjoint classes, unambiguous and unique properties, datatypes, inverses, equivalencies, lists, and much more in DAML+OIL.

Note that Notation3 introduces a "context" construct, enabling one to group statements together and quantify over them using a specially designed logic vocabulary. Using this vocabulary, for example, one can express "or", using NANDs:-

{ { :Joe :loves :TheSimpsons } a log:Falsehood .
{ :Joe :is :Nuts } a log:Falsehood .
} a log:Falsehood .

Which can be read as "it is not true that Joe does not love The Simpsons and is not nuts". I resisted the temptation to make Joe a universally quantified variable.

Note that the above example does not serialize "properly" into XML RDF, because XML RDF does not have the context construct as denoted by the curly brackets in the example above. However a similar effect can be achieved using reification and containers.
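
For instance, one rough way to quote a single statement using the reification vocabulary (a sketch only, not a drop-in replacement for the context example above) is:-

_:s rdf:type rdf:Statement .
_:s rdf:subject :Joe .
_:s rdf:predicate :loves .
_:s rdf:object :TheSimpsons .
_:s rdf:type log:Falsehood .

That is, the statement "Joe loves TheSimpsons" is described rather than asserted, and can then be said to be false.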

The Power Of Semantic Web Languages

The main power of Semantic Web languages is that anyone can create one, simply by publishing some RDF that describes a set of URIs, what they do, and how they should be used. We have already seen that RDF Schema and DAML are very powerful languages for creating languages.

Because we use URIs for each of the terms in our languages, we can publish the languages easily without fear that they might get misinterpreted or stolen, and with the knowledge that anyone in the world that has a generic RDF processor can use them.

The Principle Of Least Power

The Semantic Web works on a principle of least power: the fewer rules, the better. This means that the Semantic Web is essentially very unconstraining in what it lets one say, and hence it follows that anyone can say anything about anything. When you look at what the Semantic Web is trying to do, it becomes very obvious why this level of power is necessary... if we started constraining people, they wouldn't be able to build a full range of applications, and the Semantic Web would therefore become useless to some people.

How Much Is Too Much?

However, it has been pointed out that this power will surely be too much... won't people be trying to process their shopping lists on an inference engine, and suddenly come up with a plan for world peace, or some strange and exciting new symphony?

The answer is (perhaps unfortunately!) no. Although the basic parts of the Semantic Web, RDF and the concepts behind it are very minimally constraining, applications that are built on top of the Semantic Web will be designed to perform specific tasks, and as such will be very well defined.

For example, take a simple server log program. One might want to record some server logs in RDF, and then build a program that can gather statistics from the logs that pertain to the site; how many visitors it had in a week, and so forth. That doesn't mean that it'll turn your floppy disc drive into a toaster or anything; it'll just process server logs. The power that you get from publishing your information in RDF is that once it is published in the public domain, it can be repurposed (used for other things) much more easily. Because RDF uses URIs, it is fully decentralized: you don't have to beg for some central authority to publish a language and all your data for you... you can do it yourself. It's Do It Yourself data management.

The Pedantic Web

Unfortunately, there is an air of academia and corporate thinking lingering in the Semantic Web community, which has led to the term "Pedantic Web" being coined, and a lot of mis/disinformation and unnecessary hype being disseminated. Note that this very document was devised to help clear up some common misconceptions that people may have about the Semantic Web.

For example, almost all beginners to RDF go through a sort of "identity crisis" phase, where they confuse people with their names, and documents with their titles. For example, it is common to see statements such as:-

 dc:creator "Bob" .

However, Bob is just a literal string, so how can a literal string write a document? What the author really means is:-

<http://example.org/> dc:creator _:b .
_:b foaf:name "Bob" .

i.e., that example.org was created by someone whose name is "Bob". Tips like these are being slowly collected, and some of them are being displayed in the SWTips guide, a collection of Semantic Web hints and tips maintained as a collaborative development project.

Education And Outreach

The move away from the "Pedantic Web", to some extent, is all part of a movement to bring the power of the Semantic Web to the people. This is a well documented need:-

[...] the idea that the above URIs reveal a schema that somehow fully describes this language and that it is so simple (only two {count 'em 2} possible "statements"), yet looks like the recipe for flying to Mars is a bit daunting. Its very simplicity enables it to evaluate and report on just about anything - from document through language via guidelines! It is a fundamental tool for the Semantic Web in that it gives "power to the people" who can say anything about anything.

- EARL for dummies, William Loughborough, May 2001

RDF Schema and DAML+OIL are generally languages that need to be learned, however, so what is being done to accommodate people who have neither the time nor patience to read up on these things, and yet want to create Semantic Web applications? Thankfully, many Semantic Web applications will be lower-end applications, so you'll no more need to have a knowledge of RDF than Amaya requires one to have a knowledge of (X)HTML.

Trust and Proof

The next step in the architecture of the Semantic Web is trust and proof. Very little is written about this layer, which is a shame since it will become very important in the future.

In stark reality, the simplest way to put it is: if one person says that x is blue, and another says that x is not blue, doesn't the whole Semantic Web fall apart?

The answer is of course not, because a) applications on the Semantic Web at the moment generally depend upon context, and b) because applications in the future will generally contain proof checking mechanisms, and digital signatures.

Context

Applications on the Semantic Web will generally depend on context to let people know whether or not they trust the data. If I get an RDF feed from a friend about some movies that he's seen, and how highly he rates them, I know that I trust that information. Moreover, I can then use that information and safely trust that it came from him, and then leave it to my own judgement just how much I trust his critiques of the films that he has reviewed.

Groups of people also operate on shared context. If one group is developing a Semantic Web depiction service, cataloguing who people are, what their names are, and where pictures of those people are, then my trust of that group is dependent upon how much I trust the people running it not to make spurious claims.

So context is a good thing because it lets us operate on local and medium scales intuitively, without having to rely on complex authentication and checking systems. However, what happens when there is a party that we know, but we don't know how to verify that a certain heap of RDF data came from them? That's where digital signatures come in.

Digital Signatures

Digital signatures are simply little bits of code that one can use to unambiguously verify that one wrote a certain document. Many people are probably familiar with the technology: it's the same key-based, PGP-style system that people use to encrypt and sign messages. We simply apply that technology to RDF.

For example, let's say I have some information in RDF that contains a link to a digital signature:-

this :signature <sig.asc> .
:Jane :loves :Mary .

To ascertain whether or not we trust that Jane really loves Mary, we can feed the RDF into a trust engine (an inference engine that has a little digital signature checker built into it), and get it to work out if we trust the source of the information.

Proof Languages

A proof language is simply a language that lets us prove whether or not a statement is true. An instance of a proof language will generally consist of a list of inference "items" that have been used to derive the information in question, and the trust information for each of those items that can then be checked.

For example, we may want to prove that Joe loves Mary. The way that we came across the information is that we found two documents on a trusted site, one of which said that ":Joe :loves :MJS", and another of which said that ":MJS daml:equivalentTo :Mary". We also got the checksums of the files in person from the maintainer of the site.

To check this information, we can list the checksums in a local file, and then set up some FOPL rules that say "if file 'a' contains the information Joe loves Mary and has the checksum md5:0qrhf8q3hfh, then record SuccessA", "if file 'b' contains the information MJS is equivalent to Mary, and has the checksum md5:0892t925h, then record SuccessB", and "if SuccessA and SuccessB, then Joe loves Mary".

An example of this in Notation3 can be found in some of the author's proof example experiments, but here is the rules file:-

@prefix : <#> .
@prefix p: <proof#> .
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

p:ProvenTruth rdfs:subClassOf log:Truth .

# Proof

{ { <a.n3> p:checksum <md5:0qrhf8q3hfh>;
log:resolvesTo [ log:includes { :Joe :loves :MJS } ] }
log:implies
{ :Step1 a p:Success } } a log:Truth .

{ { <b.n3> p:checksum <md5:0892t925h>;
log:resolvesTo [ log:includes { :MJS = :Mary } ] }
log:implies
{ :Step2 a p:Success } } a log:Truth .

{ { :Step1 a p:Success . :Step2 a p:Success }
log:implies
{ { :Joe :loves :Mary } a p:ProvenTruth } } a log:Truth .

The file speaks for itself, and when processed using CWM, does indeed work, producing the intended output. CWM doesn't have the capability to automatically check file checksums or digital signatures, but it is only a matter of time before a proper Semantic Web trust engine is written.

Ambient Information and SEM

The scope of information was discussed a little, but let's take into consideration what it really means to have a "local" and a "global" system.

In general, there are small and large scale systems, and interactions between the two will most likely form a huge part of the transactions that occur on the Semantic Web. Let's define what we mean by large, medium, and small scale systems.

Large Scale

An example of a large scale system is two companies that are undergoing a merger needing to combine their databases. Another example would be search engines compiling results based upon a huge range of data. Large scale Semantic Web systems generally involve large databases, and heavy duty inference rules and processors are required to handle the databases.

Medium Scale

Medium scale Semantic Web systems attempt to make sense out of the larger scale Semantic Web systems, or are examples of small scale Semantic Web systems joined together. An example of the former is a company trying to partially understand two large scale invoice formats enough to use them together. An example of the latter is of two address book language groups trying to create a super-address book language.

Small Scale

Small scale Semantic Web systems are less widely discussed. By small scale Semantic Web systems, we mean languages that will be used primarily offline, or piles of data that will only be transferred with a limited scope, perhaps between friends, departments, or even two companies.

Sharing data on a local level is a very powerful example of how the Semantic Web can be useful in a myriad of situations. In the next section on evolution we shall be finding out how interactions between the different sized systems will form a key part of the Semantic Web.

SEM - SEmantic Memory

The concept of a SEmantic Memory was first proposed by Seth Russell, who suggested that personal database dumps of RDF that one has collected from the "rest" of the Semantic Web (a kind of Semantic Cloud) would be imperative for maintaining a coherent view of data. For example, a SEM would most likely be partitioned into data which is inherent to the whole Semantic Web (i.e., the schemata for the major languages such as XML RDF, RDF Schema, DAML+OIL, and so on), local data which is important for any Semantic Web applications that may be running (e.g. information about the logic namespace for CWM, which is currently built in), and data that the person has personally been using, is publishing, or that has been otherwise entered into the root context of the SEM.

The internal structure of a SEM will most likely go well beyond the usual triples structure of RDF, perhaps as far as quads or even pents. The extra fields are for contexts (an StID), and perhaps sequences. In other words, they are ways of grouping information within the SEM, for easy maintenance and update. For example, it should become simple to just delete any triple that was added into a certain context by removing all triples with that particular StID.

A lot of work on the Semantic Web has concentrated on making data stores (i.e. SEMs) interoperable, which is good, but that has led to less work being conducted on what actually happens within the SEM itself, which is not good, because the representation of quads and pents in RDF is therefore up in the air. Obviously, statements can be modelled as they would be for reification:-

rdf:Statement rdfs:subClassOf :Pent .
_:s1 rdf:type :Pent .
_:s1 rdf:subject :x .
_:s1 rdf:predicate :y .
_:s1 rdf:object :z .
_:s1 :context :p .
_:s1 :seq "0" .

But clearly a dedicated pentuples format is always going to be more efficient, and avoid the perils of reification:-

:x :y :z :p "0" .

This language also needs a default context flag that indicates the root context of the document. The root context of a document is the space to which the (non-quoted) assertions are parsed, the conceptual information space in which all of the assertions are taken to be true. Any quoted information in the document (for example, using the Notation3 context syntax) would be in a different (possibly anonymous) context than the root context of the document.

TimBL appears, gauging from the CWM source code, to be using "...#_formula" as the root context for the document, which (if true) is a bit of a nasty hack... what if one creates a property with the same URI? Maintaining interoperability at this level of the Semantic Web is an important thing for the Semantic Web developers to be investigating at this stage.

Evolution

A very important concept on the Semantic Web is that of evolution: going from one system into another. Two key parts of evolvability are partial understanding and transformability. We will find out next how these manifest themselves naturally when changing the scale of a system.

Partial Understanding: Large Scale to Medium Scale

The concept of partial understanding is a very important one on the Semantic Web, and can often be found in older documents that came out about the same time as the Semantic Web was first being theorized.

An example of partial understanding when moving a large scale system to a medium scale system is of a company trying to make sense out of two invoices, one from Company A and one from Company B. It is well known that both companies use similar fields in their invoices, so the company trying to make sense of the invoices can easily compile a master list of expenditures by simply scraping the data from the two invoice languages. Neither Company A nor Company B need to know that this is going on.
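
A sketch of the kind of glue that makes this possible, again using daml:equivalentTo (all of the prefixes and field names here are invented for the example):-

@prefix invA: <http://companyA.example.org/invoice#> .
@prefix invB: <http://companyB.example.org/invoice#> .

invA:totalDue daml:equivalentTo invB:amountPayable .

With a statement like this published somewhere it trusts, the reading company can fold invoices in either language into a single master list of expenditures.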

Indeed, TimBL included this example in his XML 2000 keynote:-

[...] what we'll end up doing in the future is converting things, so for example [...] in the Semantic Web we will have a relationship between two languages so that if you get an invoice in a language you don't understand, and you have... some business software which can pay invoices... by following links across the Semantic Web, your machine will be able to automatically convert it from one language to another, and so process it.
- Tim Berners-Lee

Transformability: Small Scale to Medium Scale

An example of a small scale Semantic Web system joined together to make a medium sized Semantic Web system could be two groups that have published address book formats wanting to make a larger and better address book format by merging the two current formats together. Anyone using one of the old address book formats could probably convert them into the new format, and hence there would be a greater sense of interoperability. That's generally what happens when one goes from a small scale Semantic Web system into a medium scale Semantic Web system, although this is often not without some disadvantages and incompatibilities. The Semantic Web takes the sting out of it by automating 99% of the process (it can convert field A into field B, but it can't fill in any new data for you... of course, new fields can always be left empty for a while).

Facilitating Evolvability

How do we document the evolution of languages? This is a very important and indeed urgent question, and one which TimBL summarized quite neatly:-

Where for example a library of congress schema talks of an "author", and a British Library talks of a "creator", a small bit of RDF would be able to say that for any person x and any resource y, if x is the (LoC) author of y, then x is the (BL) creator of y. This is the sort of rule which solves the evolvability problems. Where would a processor find it? [...]

- Semantic Web roadmap, Tim Berners-Lee

One possible answer is: third party databases. Very often, it is not practical to have (in TimBL's example) either the LoC or the BL record the fact that two of their fields are the same, so this information will have to be recorded by a reputable third party.
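
The "small bit of RDF" that TimBL describes could be written in more than one way; here is a hedged sketch, with the loc: and bl: prefixes and term names invented for the illustration. The simplest form is a sub-property statement:-

@prefix loc: <http://loc.example.org/terms#> .
@prefix bl: <http://bl.example.org/terms#> .

loc:author rdfs:subPropertyOf bl:creator .

or, written as an explicit rule using the log: vocabulary and Notation3 variables:-

{ ?doc loc:author ?person } log:implies { ?doc bl:creator ?person } .

Either way, the statement lives in a document of its own, which is exactly why the question of where a processor finds it matters.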

One such "third party" that was set up to investigate this is SWAG, the Semantic Web Agreement Group. Co-founded by Seth Russell, Sean B. Palmer, Aaron Swartz, and William Loughborough, the group aims to ensure interoperability on the Semantic Web. They set up what is possibly the first ever third party Semantic Web dictionary, the WebNS SWAG Dictionary.

Intertwingling: Difficult, But Important

Although the Semantic Web as a whole is still very much at a grassroots kind of level, people are starting to take notice; they're starting to publish information using RDF, and thereby making it fit for the Semantic Web.

However, not enough is being done to link information together... in other words, the "Semantic" part of the "Semantic Web" is coming along nicely, but where's the "Web"? People are not using other people's terms effectively; when they use other terms, they often do so because they're aimlessly trying to help, but just generating noise in the process. If you're going to use other people's data, try to find out what the advantage is in doing that beforehand. For example, just because you use the term "dc:title" in your RDF rather than a home-brewed ":title", does that mean that suddenly a Dublin Core application is going to be able to "understand" your code? Of course not. What it does mean, however, is that if the "dc:title" property in your instance is being put to use in such a way that information may need to be repurposed from it in the near future, then you may gain some advantage: because "dc:title" is such a commonly used term, you may be able to reuse or modify an existing rules file, or whatever.

Another part of the problem may be due to a problem similar to the one that the early World Wide Web experienced: why bother publishing a Web site when there is no one else's site to link to or be linked to? Why bother publishing a Web site when so few people have browsers? Why bother writing a browser when there are so few Web sites? Some people have to make the leaps for it all to happen, and that's a slow process.

What can be done about the situation? Well, it may hopefully sort itself out. Another well-known principle that applies very well to Semantic Web applications is that there is no point in reinventing the wheel; viz., if someone has already invented a schema which contains a comprehensive and well understood and used set of terms that you also need to use in your application, then there is no point in trying to redo the work that they have done. At some points this may lead to a form of "schema war", but survival of the fittest should see to it that a core of the best schemata are put to the most use. This is probably what TimBL means when he says that terms will just "emerge" out of the Semantic Cloud, that when people keep using the term "zip", rather than just recording that my term "zip" is equivalent to your term "zip" which is equivalent to someone else's term "zip", we'll all just use the same URI, and hence interoperability will be vastly improved.

Does It Work? What Semantic Web Applications Are There?

I addressed this in a previous article: The Semantic Web: Taking Form, but it does bear repeating: the Semantic Web already works, and people are using it.

Semantic Web Applications

Unfortunately, the Semantic Web is dissimilar in many ways from the World Wide Web, including that you can't just point people to a Web site for them to realise how it's working, and what it is. However, there have been a number of small scale Semantic Web applications written up. One of the best ones is Dan Connolly's Arcs and Nodes diagrams experiment:-

One of the objectives of the advanced development component of the Semantic Web activity is to demonstrate how RDF and Semantic Web technologies can be applied to the W3C Process to increase efficiency, reliability, etc. In the early stages of developing an RDF model of the W3C process, the tools I was using to visualize the model while working on it started working well enough that I started applying them to all sorts of stuff.

Of course, this is a rather demonstration-oriented Semantic Web project, but it does illustrate the feasibility of applications being easily built using Semantic Web toolkits.

Another good example of the Semantic Web at work is Dan Brickley et al.'s RDFWeb. RDFWeb is an RDF database-driven hypermedia blogspace, a site where all information is stored as RDF, and that RDF is then used to render XHTML. Plans are underway to incorporate more advanced Semantic Web principles into the site.

What Can I Do To Help?

There are many ways in which one can contribute to creating the Semantic Web. Here's a few of them:-

  • Publish some globally useful data in RDF.
  • Write an inference engine in the language of your choice.
  • Spread the word: do some education and outreach.
  • Help in the development of RDF Schema and/or DAML.
  • Contribute to representing state in RDF, a rather neglected field of research.
  • Apply your own development backgrounds to the Semantic Web, give us all a new angle to consider it from.
  • Instead of using some proprietary system for your next application, consider making it a Semantic Web project instead.

There are many other ways in which one can help as well: ask in the community for more details.

What Now? Further Reading

As of 2001-09, the amount of Semantic Web Education and Outreach materials can only really be described as "pitiful" (hence this introduction, for a start). Here's a short list of some of the good primers and materials currently available, in no particular order:-

For more information, all the latest news etc., Dave Beckett's Resource Description Framework (RDF) Resource Guide is absolutely brilliant.

Many Semantic Web and RDF developers hang out on the RDF IG IRC chatroom, on irc.openprojects.net, #rdfig.

Acknowledgements

Many thanks to Aaron Swartz, Dan Brickley, and Danny Ayers for reviewing this, and pointing out many important little details that I missed.


Filsafat Ilmu (Philosophy of Science)

15 Responses to "Filsafat Ilmu"

  1. yayah tarsiah says:

    What factors explain why the mastery of science and technology is now in the hands of the Jewish people? Because, as far as we know from history, science and technology used to be led by Muslims; for example Ibnu Sina, the expert in medicine, among others. But now it has shifted, and it is the Jews who lead. What is the cause?...

    Name: yayah tarsiah
    Class: 1-C / PE-AP

  2. ryan says:

    Good morning, sir. I am Ryan, a student in the Faculty of Communication Science at Universitas Sahid Jakarta. I would like to ask for your help with the ontology of communication science; I hope you can help me with my assignment on that topic.

    Thank you very much in advance for your help.

  3. UC KENCLENG/ DEDI.H 1C PEAP says:

    Truly, knowledge has many virtues: its head is humility, its eyes are freedom from envy, its ears are understanding, its tongue is honesty, its memory is careful examination, its heart is good intention, its intellect is knowing what is obligatory, its hands are compassion and keeping up good relations, its feet are visiting the scholars, its resolve is health, its wisdom is caution, its dwelling place is safety, its guide is well-being, its mount is faithfulness, its weapon is gentleness of speech, its sword is contentment, its bow is dialogue, its army is staying close to the scholars, its wealth is good manners and good deeds, its savings are the avoidance of sin, its provision is goodness, its water is parting, its signpost is divine guidance, and its closest friend is the companionship of righteous people who seek knowledge.

  4. ANDRIANTO/MAN.KARYAWAN says:

    Knowledge is understanding whose truth has already been proven. So, as human beings, if we want to be happy and prosperous we must think, grounded in knowledge of both this world and the hereafter; then we need not struggle to attain a successful life, because once we have found the key (knowledge) we will always be watchful and careful in everything we do.

  5. ANDRIANTO/MAN.KARYAWAN says:

    Knowledge is understanding whose truth has already been proven. So, as human beings, if we want to be happy and prosperous we must think, grounded in knowledge of both this world and the hereafter; then we need not struggle to attain a successful life, because once we have found the key (knowledge) we will always be watchful and careful in everything we do.
    As Allah SWT says in Surah Fathir, verse 28: "Among His servants, it is only those who have knowledge (the ulama) who truly fear Allah."

  6. Asep Kuhro says:

    1. * Benefits of studying the Philosophy of Science:
    - it adds to our knowledge
    - people always want to see how something can be achieved well
    - it builds a stronger level of faith
    - it lets us think in an orderly and logical way
    * benefits of studying the philosophy of science:
    - knowledge can be examined more easily
    - we come to know the various kinds of philosophy: social philosophy, for example, gives rise to the social sciences
    - philosophy is very important for our daily lives
    - through knowledge we can learn new and important things
    2. Yes, there is a difference:
    ordinary thinking can happen without any element of science or philosophy;
    scientific thinking is thinking based on knowledge;
    philosophical thinking is thinking based on science.
    3. Because it pursues its aims more broadly and more deeply.
    4. Ordinary thinking is just everyday thinking, while philosophizing is normal thinking that is grounded in knowledge and directed at reaching a goal.
    5. To think quickly and correctly, because if a task is not yet finished my heart and feelings will not be at ease, and I want to complete this assignment as well as I can, according to my ability.

  7. Eman/3/akun says:

    1. By studying philosophy and the philosophy of science, I can learn new things (knowledge) that are useful for life and that I did not know before. Besides that, I can think about things on the basis of the philosophy of science, that is, logically and rationally.
    2. - Ordinary thinking is a process of thinking and developing ideas without any basis in science or philosophy; the thinking here is simply plain thinking.
    - Scientific thinking is a process of thinking and developing ideas arranged systematically on the basis of existing scientific knowledge.
    - Philosophical thinking is a process of thinking and developing ideas on the basis of still deeper knowledge, so that it produces thought that can genuinely be accounted for.
    3. Science must rest on philosophical assumptions because, with philosophy, science can develop further and go deeper.
    4. Plain thinking is not based on science or philosophy; it is just ordinary thinking. Philosophizing, on the other hand, includes thinking that is based on scientific knowledge.
    5. When I read the questions I thought they were very easy, but when I had to answer them I was, frankly, confused about what kind of answers to give. I understand "thinking, science and philosophizing", but it is hard to put into words.

    EMAN SULAEMAN
    03203510131
    Year 3 / ACCOUNTING (employee class)

  8. defi/3/akun says:

    1) By studying philosophy and the philosophy of science, we as thinking human beings gain more knowledge, because philosophy here invites us to think about our thinking, that is, to think logically and rationally with scientific proofs that can be accepted by everyone.
    2) Yes, there is a difference:
    Ordinary thinking is a process of developing ideas and concepts whose level is still limited or simple, because the process is very easy and almost everyone can do it.
    Scientific thinking is thinking arranged systematically about a particular object, producing scientific knowledge.
    Philosophical thinking is the highest level of thinking; it encompasses ordinary thinking, scientific thinking and philosophizing, that is, thinking logically and rationally with proofs that can be accounted for and accepted by everyone.
    3) Because if science does not rest on philosophical assumptions it will not develop or produce new knowledge, whereas if thinking rests on philosophy, science will keep developing.
    4) Thinking does not encompass philosophizing, because thinking is simple and produces something ordinary that almost anyone can manage, whereas philosophy encompasses ordinary thinking, scientific thinking and philosophical thinking, and not everyone can think philosophically.
    5) What I thought when reading the questions above is that we as human beings must use our minds to think, but we should not be content with ordinary thinking, because almost everyone can do that. We should want to think systematically, grounded in rational proof, that is, to philosophize. Besides that, we must also ground our thinking in religious knowledge, for if we philosophize without religious knowledge we may bring harm upon ourselves; the function of religious knowledge here is to lead our thinking according to true teaching. If we philosophize, everything rests on the available proofs, yet in this life we as Muslims must also believe in things that cannot be proven or grounded in philosophy; we simply have to believe, for example in the event of Isra Mi'raj.

    Defi Gusfari
    03203510125
    Year 3 / Accounting (employee class)

  9. Dr. Singkop Boas Boangmanalu says:

    If you are interested in philosophy, I invite you to be my discussion partner.... (Dr. Boas Boangmanalu, lecturer at the Postgraduate Department of Philosophy)

  10. Asep Kuhro says:

    1. An experience of the benefit of knowledge that I have felt:
    In my work, for instance, knowledge matters a great deal; with knowledge we understand what has to be done, we understand the problems that arise in our work, and we find it easier to solve problems both at work and in everyday life.
    2. True,
    because philosophy is knowledge that we possess; philosophy can also be a matter of our faith; with philosophy we can think correctly and in a directed way, and it adds knowledge and insight we did not yet have.
    3. That concern cannot be separated from the individuals themselves and whether they are willing to probe today's thoroughly exploitative way of life with their knowledge, by learning a great deal about life as it is now.
    4. - Value-free science:
    science that is not bounded (conventional), whose rules are unwritten.
    - Science that is not value-free:
    science studied formally through education, access to which can be pursued in everyday life.
    5. What I think is that the knowledge of life today tends toward instant thinking: finished quickly and easily, with good results.

  11. DADAN AHMAD GANDARA says:

    Dear Mr. Uhar Suharsaputra,

    Sir, I am Dadan Ahmad Gandara,
    employee class, Management Department, Faculty of Economics.
    I would like to ask about my grade for the Philosophy of Science course:
    my grade is not in the grade list. I took the make-up final exam,
    and I have answered the questions on your website.
    I ask for your kind understanding.
    That is all, and thank you.

    Dadan Ahmad Gandara

  12. dani says:

    Sir, I am a new student in the Communication Science department. What I would like to ask is:
    1. What are the characteristics of science?
    2. What are the characteristics of the social sciences?
    3. Why is communication science considered a science?

    Please reply soon, for the sake of my assignment, which is close to its deadline. I hope you will be understanding. My sincere apologies. That is all, and thank you.

  13. uhar says:

    Your grade had been overlooked; it has now been entered: an A.

  14. indra says:

    Greetings, sir.
    I would like to take this opportunity to ask about the definition of the philosophy of science and the history of the development of philosophy.
    Thank you very much in advance.

CHAPTER 1

SCIENCE, PHILOSOPHY AND THEOLOGY

"I come - from where, I do not know,

who I am - I do not know,

I go - to where, I do not know,

I shall die - when, I do not know,

and I marvel that I am cheerful."

(Martinus of Biberach,

a figure of the Middle Ages).

1. The human being asks

Confronted with the whole of reality in life, human beings marvel at what they see, wonder whether their senses are deceiving them, and begin to realize their limitations. In that situation, many turn to religion:

"From the various religions, people expect an answer to the hidden riddles surrounding the human condition. Now as in the past, those riddles deeply trouble the human heart: what are the meaning and purpose of our life, what is goodness and what is sin, what are the origin and the purpose of suffering, which is the way to true happiness, what is death, what are judgment and retribution after death, and finally what is that ultimate, inexpressible mystery which envelops our existence, from which we come and toward which we go?" -- Nostra Aetate ("Our Age", no. 1), the Declaration of the Second Vatican Council on the Attitude of the Catholic Church toward Non-Christian Religions, 1965.

One fruit of reflection on these questions, proceeding from an attitude of devout faith in God, is found in Psalm 8:

"O Lord, our God, how majestic is Your name in all the earth!

Your majesty above the heavens is sung.

From the mouths of babies and nursing infants comes praise of You, silencing Your enemies and foes.

When I look at Your heavens, the work of Your fingers, the moon and the stars which You have set in place;

what is man, that You are mindful of him?

Who is he, that You care for him? -- Yet You have made him little less than God, and have crowned him with glory and honor.

You have given him dominion over the works of Your hands; You have put all things under his feet:

all sheep and oxen,

and also the beasts of the field;

the birds of the air and the fish of the sea,

and whatever passes along the paths of the seas.

O Lord, our God, how majestic is Your name in all the earth!"

2. The human being philosophizes

Yet from the very beginning of history it is clear that this attitude of devout faith has not kept human beings from using reason and thought to find out what really lies behind the whole of that reality. That process of finding out produces awareness, which is called knowledge. When the process is methodical, systematic and coherent, and the way its results are obtained can be accounted for, science (ilmu pengetahuan) is born.

Science is knowledge that (1) is arranged methodically, systematically and coherently ("hangs together") about a particular domain of reality, and that (2) can be used to explain particular phenomena in that domain.

The more science digs into and concentrates on the particular aspects of reality, the more pressing becomes the demand to find out about reality as a whole.

Philosophy is methodical, systematic and coherent knowledge about the whole of reality. Philosophy is rational reflection on the whole of reality, aimed at reaching its essence (truth) and at obtaining wisdom.

Al-Kindi (801-873 CE): "The highest-ranking human activity is philosophy, which is true knowledge of the essence of all that exists, as far as this is possible for human beings ... The noblest part of philosophy is first philosophy, namely knowledge of the first truth, which is the cause of all truth."

The "rational" element (the use of reason) in this activity is an absolute requirement in the effort to study and to express "fundamentally" the human journey through this world toward the hereafter. It is called "fundamental" because the effort is meant to arrive at a formulation of the first causes, or the final causes, or even the deepest causes of the object being studied (the "material object"), namely "the human being in the world, journeying toward the hereafter". That is scientia rerum per causas ultimas -- knowledge of things through their ultimate causes.

Karl Popper (1902-1994) wrote that "all people are philosophers, because everyone takes some attitude toward life and death. Some hold that life is without value because it will end. They do not realize that the reverse argument can also be made: that if life were never to end, it would be without value; and that the ever-present danger of losing life at least helps us to realize its worth." Since philosophizing is thinking about life, and "berpikir" is "to think" (English), "denken" (German), then, according to Heidegger (1889-1976), in "thinking" we are in fact "thanking" ("to thank" in English, "danken" in German) the Giver of life for every gift of life granted to us.

It is also interesting to note that the English word for "hikmat" is "wisdom", with the root "wise", or "wissen" in German, meaning to know. In Norwegian this is "viten", which shares a root with the Sanskrit "vidya", Indonesianized as "widya". That word is close to "widi" in "Hyang Widi" = God. "Vidya" is in turn close to the Greek "idea", first put forward by Socrates and Plato and mined ever since by philosophers through all the ages.

According to Aristotle (384-322 BCE), our thinking passes through three kinds of abstraction (abstrahere = to draw away from, to take from). Each kind of abstraction gives rise to one kind of science within the body of knowledge that in his day was called philosophy:

The first level of abstraction - physics. We begin to think when we observe. In thinking, our reason "detaches itself" from certain aspects of sense observation, namely from "sensible matter" ("hyle aistete"). From particular, concrete things it draws out what is general: that is the process of abstraction from individual features. Human reason, working with that "abstracted" matter, produces the science called "physics" ("physis" = nature).

The second level of abstraction - mathesis. In the further process of abstraction, we can detach ourselves from visible matter. That happens when reason retains from matter only the aspect that can be understood ("hyle noete"). The science produced by this kind of abstraction from all material features is called "mathesis" ("mathematics"; mathesis = knowledge, learning).

The third level of abstraction - theology or "first philosophy". We can "abstract" from all matter and think about the whole of reality, about its origin and purpose, about the principles of its constitution, and so on. The levels of physics and of mathematics have clearly been left behind. Thinking at this level produces the science that Aristotle called theology or "first philosophy". Yet because this science "comes after" physics, in the later tradition it is called metaphysics.

In short, philosophy covers "everything". Philosophy comes before and after science: "before" because every special science began as a part of philosophy, and "after" because every special science inevitably runs into questions about the limits of its own specialization.

3. The human being does theology

Theology is methodical, systematic and coherent knowledge of the whole of reality on the basis of faith. Put simply, faith can be defined as the human stance before God, the Absolute and the Holy, acknowledged as the Source of all life in this universe. Faith comes to be in a person through education (for example by one's parents), but it can also come through one's own effort, for example by carefully reflecting on one's life before the Giver of that life. Here God is understood as the Reality that is most awe-inspiring and most overwhelming. In that last sense, to do theology is certainly also to philosophize.

Faith is an inner attitude. A person's faith takes shape in his or her attitudes, behavior and deeds toward fellow human beings and toward the living environment. When the same faith (whatever the word "same" may mean) is present in and shared by a number or a group of people, a process of institutionalization takes place. That institutionalization takes the form of, for example, (1) the rites by which the group wishes to express its faith in prayer and worship, (2) the values and rules that guide the living out and practice of that faith in daily activity, and (3) the body of teaching, the content of faith, to be communicated (proclaimed) and preserved. When such institutionalization occurs, religion is born. Religion is therefore the social embodiment of faith.

Notes.

(1) The process called institutionalization here is an effort, methodical, systematic and coherent in character, directed at the reality of an awareness of the presence of the Reality that transcends life. It is presumably in this context that the word for reason ("'aql") and the word for knowledge ("'ilm") are used in the text of the Qur'an. The closeness of the word 'ilm to the adjective 'alim and to the word ulama can likewise be understood. See also Yusuf Qardhawi's book "Al-Qur'an berbicara tentang akal dan ilmu pengetahuan" (The Qur'an on reason and science), Gema Insani Press, 1998. At the same time it must also be said that the word "ilmu" (science) in its common present-day sense, though similar, is still not the same as the meaning of that word in the text and context of the Qur'an.

(2) The process by which religion comes into being as described here deserves to be called an approach "from below". The initiative appears to come from human beings, who want to discover the essence of their life in this world in relation to the Source of life and of living. Human beings then walk and order their lives according to what they have found. The approach "from above" is evident in the revealed religions: God takes the initiative to reveal His will to human beings, and faith is therefore the human response to God's "address".

As a science, theology reflects on the relation between God and human beings. People do theology because they want to understand their faith better and to give an account of it: "I know whom I have believed" (2 Tim 1:12). Theology is not religion and is not the same as religious doctrine. In theology, the element of "intellectus quaerens fidem" (reason inquiring into the content of faith) is expected to make a substantial contribution to the integration of reason and faith, of science and technology with faith and piety, which in turn is of great benefit to human life today.

4. Material object and formal object

Philosophy has a material object and a formal object. The material object is what is studied and examined as the subject matter, namely the phenomenon of "the human being in the world, journeying toward the hereafter". Three things clearly stand out in this phenomenon: the human being, the world, and the hereafter. Hence there is a philosophy of the human being (anthropology), a philosophy of nature (cosmology), and a philosophy of the hereafter (theology, the philosophy of God; in the context of a life of faith the word "hereafter" can easily be replaced by the word God). Anthropology, cosmology and theology, though they appear separate, are nevertheless interrelated, for a discussion of one can never be detached from the others. Philosophy, moreover, can speak of the hereafter or of God only insofar as these are known to human beings in their world.

The formal object is the manner of approach taken to the material object, an approach so distinctive that it characterizes or specializes the field of activity concerned. If that approach is logical, consistent and efficient, a philosophical system results.

Philosophy sets out from the concrete experience of human beings in their world. Human experience, rich as it is with everything that remains implicit, seeks to be stated explicitly. In that process intuition (which is present in every experience) becomes the basis for abstraction, so that what is implicit can be expressed and made explicit.

Within philosophy there is a philosophy of knowledge. "All human beings desire to know" is the first sentence of Aristotle's Metaphysics. Its material object is the phenomenon "the human being knows". The task of this branch of philosophy is to examine that phenomenon in the light of its first causes. It explores "truth" (versus "falsehood"), "certainty" (versus "uncertainty"), "objectivity" (versus "subjectivity"), "abstraction", "intuition", where knowledge comes from and where it is heading. In turn the phenomenon of the sciences also becomes a material object, and that activity of thought (insofar as it is carried out according to first causes) produces the philosophy of science. What is specific about the phenomenon of science, as against the phenomenon of knowledge in general, is examined carefully; that specificity lies in the way of working, the method, found in the sciences.

5. The branches of philosophy

5.1. Although it asks about the whole of reality, philosophy is always "philosophy of" something: of the human being, of nature, of the hereafter, of culture, the arts, language, law, religion, history, ... All of these can be traced back to four parent fields:

1. philosophy of knowledge:

material object: knowledge ("episteme") and truth

epistemology;

logic;

critique of the sciences;

2. philosophy of the whole of reality:

material object: existence and essence

general metaphysics (ontology);

special metaphysics:

anthropology (on the human being);

cosmology (on the universe);

theodicy (on God);

3. philosophy of the values involved in action:

material object: goodness and beauty

ethics;

aesthetics;

4. the history of philosophy.

5.2. A few remarks are offered here specifically on the philosophy of knowledge. It asks: What is knowledge? Where does it come from? Is there any certainty in knowledge, or is everything mere hypothesis or conjecture?

Questions about the possibilities of knowledge, the limits of knowledge, and the origin and kinds of knowledge are treated in epistemology. Logic ("logikos") "has to do with knowledge", "has to do with language". Language is understood here as the way in which knowledge is communicated and expressed. Logic is therefore the branch of philosophy that investigates the soundness of reasoning and the rules that must be respected for statements to be valid.

There are many sciences, and there is a tree of the sciences, that is, an account of how one science is connected with another. It is called a tree because it is understood that there must be a mother (a root) of all the sciences. The critique of the sciences questions the theories by which the sciences are classified, the methods used within them, the basis of their certainty, and the kind of explanation they give.

5.3. According to their manner of approach, many schools of philosophy are recognized: existentialism, phenomenology, nihilism, materialism, ... and so on.

5.4. There is certainly a philosophy about religion, that is, philosophical (critical, analytical, rational) thinking about the phenomenon of religion: the nature of religion as an embodiment of human religious experience, and the nature of the human relation to the Holy (the Numen): a trans-empirical reality which so strongly influences and determines, yet at the same time shapes and grounds, human conduct. The Holy is understood as Mysterium Tremendum et Fascinosum; toward it human beings can only have faith, which can be observed (by an onlooker) in a way of life marked by "fear-and-reverence", wedi-lan-asih ing Panjenengane.

In that sense there is no philosophy of religion X; what exists is philosophy within religion X, that is, thinking directed toward building a rational infrastructure for the teachings of religion X. The relation between philosophy and religion X can be likened to the relation between pilgrims and the vehicle they board for the pilgrimage to the Holy Land, not to the relation between the pilgrims and the faith in their hearts.

Further notes.

1. Faith can be pictured as something like an iceberg in the ocean: only about a tenth of the whole is visible. Because faith is a state of the heart, the proverb applies: "the depth of the sea can be fathomed, but who can know the depth of the heart?" Do you know the measure of my faith?

2. At the same time it is worth asking where this "heart" is located. Surely the "heart" meant here (as in the Indonesian "sakit hati", heartache, when a young woman is let down by her boyfriend) is not the liver organ (the "sakit hati" of a swollen liver) treated by doctors in a hospital. Consider also what is implied in words and phrases such as "batin" (the inner self), "kalbu" (the heart), "berhati-hatilah" (be careful), "jantung hati" (sweetheart), "jatuh hati" (to fall in love), "hati nurani" (conscience) and "suara hati" (the voice of the heart).

3. According to Paul A. Samuelson, the tyranny of words is a general phenomenon in society. Often many words are used to convey the same meaning, and many meanings are attached to a single word. Human beings are challenged to think and speak clearly and distinctly, at the very least to avoid miscommunication and to uphold the truth. That is the advice of Rene Descartes. Indeed, a person's maturity in facing problems (including the problems of his or her own life) is closely related to this ability to think and speak clearly and distinctly.

6. Rational reflection and the reflection of faith

When the Greeks began to reflect on the problems that now form the material object of philosophy, and even by the time the results of that reflection were set down in the texts that have since become classics, the people of Israel already possessed a number of writings (now known as the part of the Bible called the Old Testament). Those writings are in essence also the fruit of reflection, by the fathers of that people, on the fate and fortune of Israel: how, in its historical journey as the "chosen people", it was truly guided (and often also sternly rebuked and punished) by YHWH (read: Yahweh), their God. Close ties with tradition and worship made those writings the Scripture of their religion (Judaism). In turn, that Scripture also holds a unique position in Christianity.

Note.

The people of Israel (and Israel in the Bible) as meant above should not necessarily be identified with the people of Israel now living in the geographical territory currently called "the state of Israel".

The two reflections differ in many respects. The reflection of the Greek thinkers (for example Plato and Aristotle) relied on reason and sprang from their rejection of mythology (the view that the world is perpetually ruled by gods and goddesses). The reflection of the fathers of Israel (for example Moses, generally accepted as the author of the first five books of the Old Testament) was, by contrast, borne by the heart, for it sprang from Israel's acceptance of the role of YHWH in the whole fate and history of that people. That reflection of faith was truly a universal declaration of sincere acknowledgment, perhaps the first in human history, of the omnipotence of God in human life and history.

Some now hold that the fruits of the rational reflection of those Greek thinkers, assimilated with the tradition of monotheistic religious reflection, became the seed from which the sciences known today were born. For this reason philosophy is often said to stand above every science.

Meanwhile, it should be noted that within the cultural spheres of India and China a reflection of a different hue developed as well: the face of Asia. That reflection is evident in the accumulated knowledge they produced (for example the "alternative medicine" of acupuncture) and in the world-class literary works of the Indian subcontinent. Those literary works are often treated as sacred texts, or honored as Scripture, because they are received as books full of things of sacred value, fit to guide daily life.

Take, for example, the Bhagavadgita (4th century BCE). The Bhagavadgita (or Gita) was lifted out of the epic Mahabharata, from a secondary position (part of a story) to a primary one (a source of all inspiration for living). In the 8th century CE, Sankara (a teacher) interpreted the Gita not as a guide for action but as a guide to "moksha", liberation from attachment to this world. Ramanuja (12th century CE) saw it as a source of devotion to the mercy of God, which can only be lived out through love. During the struggle for independence around the 1940s, the Gita was read as a guide for "dharma yuddha", a spirited war to uphold truth against an unjust colonizer. For Tilak, Arjuna is "a man of action" ("karma yogin"), and the Gita urges a person to act in such a way as to reach "moksha" through the "struggle" undertaken. Aurobindo, Mahatma Gandhi, Bhave, Radhakrishnan and other figures offered broadly similar commentaries. Without Tilak's interpretation, for instance, the upheaval in India at that time could easily have been judged purely political (or purely criminal?), that is, as lacking any ideal, spiritual, theological or ethical foundation.

In truth, to reflect is characteristic of the human being, both as a person and in groups. Reflection is a means of developing spirituality and of growing into a whole, mature and independent human being. Through reflection, too, human beings and human communities (tribes and nations) discover their identity, become aware of their place in space and time (in history), and carry out their calling to make history for the future.

Note.

Is there a reflection on reality that is distinctively Indonesian? One study, based on Javanese literary texts of the past, can be found in the doctoral dissertation of P. J. Zoetmulder SJ, "Manunggaling Kawula Gusti" (1935), which has been translated by Dick Hartoko SJ and published by PT Gramedia.


Pragmatism

From Wikipedia, the free encyclopedia


Pragmatism is a philosophic school generally considered to have originated in the late nineteenth century with Charles Peirce, who first stated the pragmatic maxim. It came to fruition in the early twentieth-century philosophies of William James and John Dewey. Most of the thinkers who describe themselves as pragmatists consider practical consequences or real effects to be vital components of both meaning and truth. Other important aspects of pragmatism include anti-Cartesianism, radical empiricism, instrumentalism, anti-realism, verificationism, conceptual relativity, a denial of the fact-value distinction, a high regard for science, and fallibilism.

Pragmatism began enjoying renewed attention from the 1950s on, thanks to a new school of philosophers, notably the analytic philosophers W. V. O. Quine and Wilfrid Sellars, who put forth a revised pragmatism critical of the logical positivism that had dominated philosophy in the United States and Britain since the 1930s. Their naturalized epistemology was further developed and widely publicized by Richard Rorty, whose later work grew closer to continental philosophy and is often considered relativistic. Contemporary pragmatism is divided among thinkers who work strictly within the analytic tradition, a more relativistic strand in the wake of Rorty, and neoclassical pragmatists such as Susan Haack who stay closer to the work of Peirce, James and Dewey.


Origins

Charles Peirce: the American polymath who started it all.

As a philosophical movement, pragmatism originated in the United States in the late 1800s. The thought and works of Charles Sanders Peirce (pronounced /ˈpɝs/) and William James (both members of The Metaphysical Club) as well as John Dewey and George Herbert Mead figured most prominently in its overall direction. The term pragmatism was first used in print by James, who credited Peirce with coining the term during the early 1870s. Prompted by James' use of the term and its attribution to him, Peirce began writing and lecturing on pragmatism to make clear his own interpretation. Peirce eventually coined the new name pragmaticism to mark what he regarded as the original idea, for clarity's sake and possibly (but not certainly) because he disagreed with James (cf. Menand 2001 on the former interpretation; below on the latter). He claimed that the term was so ugly, nobody would be tempted to steal it (Haack 1998).

James and Peirce were inspired by several earlier thinkers, notably Alexander Bain, who examined the crucial links among belief, conduct, and disposition by saying that a belief is a proposition on which a person is prepared to act. Earlier thinkers that inspired the pragmatists include Francis Bacon who coined the phrase "knowledge is power", David Hume for his naturalistic account of knowledge and action, Thomas Reid for his direct realism, Immanuel Kant for his idealism and from whom Peirce derives the name "pragmatism", Georg Hegel for his introduction of temporality into philosophy (Pinkard in Misak 2007), and J.S. Mill for his nominalism and empiricism.

Pragmatist epistemology

The epistemology of the early pragmatists was heavily influenced by Darwinian thinking. Pragmatists were not the first to see the relevance of evolution for theories of knowledge: the same rationale had for example convinced Schopenhauer that we should adopt biological idealism because what's useful to an organism to believe might differ wildly from what is actually true. Pragmatism differs from this idealist account because it challenges the assumption that knowledge and action are two separate spheres, and that there exists an absolute or transcendental truth above and beyond the sort of inquiry that organisms use to cope with life. Pragmatism, in short, provides what might be termed an ecological account of knowledge: inquiry is construed as a means by which organisms can get a grip on their environment. 'Real' and 'true' are labels that have a function in inquiry and cannot be understood outside of that context. It is not realist in a traditional robust sense of realism (what Hilary Putnam would later call metaphysical realism), but it is realist in that it acknowledges an external world which must be dealt with.

John Dewey says something is "made true" when it is verified. Contrary to what some critics think, he does not mean that people are free to construct a worldview as they see fit.

A general tendency by philosophers to push all views into either the idealist or realist camp, as well as William James' occasional penchant for eloquence at the expense of public understanding, resulted in the widespread but false characterization of pragmatism as a form of subjectivism or idealism. Many of James' best-turned phrases — "truth's cash value" (James 1907, p. 200) and "the true is only the expedient in our way of thinking" (James 1907, p. 222) — were taken out of context and caricatured in contemporary literature as representing the view that any idea that has practical utility is true. William James writes:

It is high time to urge the use of a little imagination in philosophy. The unwillingness of some of our critics to read any but the silliest of possible meanings into our statements is as discreditable to their imaginations as anything I know in recent philosophic history. Schiller says the truth is that which 'works.' Thereupon he is treated as one who limits verification to the lowest material utilities. Dewey says truth is what gives 'satisfaction'! He is treated as one who believes in calling everything true which, if it were true, would be pleasant. (James 1907, p. 90)

In reality, James asserts, the theory is a great deal more subtle. (See Dewey 1910 for a 'FAQ')

Pragmatists do disagree with the view that beliefs must represent reality to be true - "Copying is one [and only one] genuine mode of knowing" says James (James 1907, p. 91) - and argue that beliefs are dispositions which qualify as true or false depending on how helpful they prove in inquiry and in action. It is only in the struggle of intelligent organisms with the surrounding environment that theories acquire meaning, and only with a theory's success in this struggle that it becomes true. However most pragmatists do not hold that anything that is practical or useful, or that anything that helps to survive merely in the short term, should be regarded as true. For example, to believe that my cheating spouse is faithful may help me feel better now, but it is certainly not useful from a more long-term perspective because it doesn't accord with the facts (and is therefore not true).

Concept of truth

Going back to James, pragmatists have often spoken of how truth is not ready-made, but that jointly we and reality "make" truth. This idea has two senses, one which is often attributed to William James and F.C.S. Schiller, and another that is more widely accepted by pragmatists: (1) that truth is mutable, and (2) truth is relative to a conceptual scheme.

(1) Mutability of truth

One major difference among the pragmatists about the definition of 'truth' is the question of whether beliefs can pass from being true to being untrue and back. For James, beliefs are not true until they have been made true by verification. James believed propositions become true over the long term through proving their utility in a person's specific situation. The opposite of this process is not falsification, but rather a belief ceasing to be a "live option." F.C.S. Schiller, on the other hand, very clearly asserted that beliefs could pass into and out of truth situationally. Schiller held that truth was relative to specific problems. If I want to know how to return home safely, the true answer will be whatever is useful to solving that problem. Later on, when faced with a different problem, what I came to believe when faced with the earlier problem may now be false. As my problems change and as the most useful way to solve a problem shifts, so does the property of truth.

C.S. Peirce thought the idea that beliefs could be true at one time but false at another (or true for one person but false for another) was one of the "seeds of death"[1] by which James allowed his pragmatism to become "infected." Peirce avoided this position because he took the pragmatic theory to imply that theoretical claims should be tied to verification practices (i.e. they should be subject to test), not that they should be tied to our specific problems or life needs. Truth is defined, for Peirce, as what would be the ultimate outcome (not any outcome in real time) of inquiry by a (usually scientific) community of investigators. John Dewey, while agreeing broadly with this definition, also characterized truthfulness as a species of the good: to state that something is true means stating that it is trustworthy or reliable and will remain so in every conceivable situation. Both Peirce and Dewey clearly connect the definitions of truth and warranted assertability. Hilary Putnam also developed his internal realism around the idea that a belief is true if it is ideally epistemically justified. About James' and Schiller's account, Putnam says this:

Truth cannot simply be rational acceptability for one fundamental reason; truth is supposed to be a property of a statement that cannot be lost, whereas justification can be lost. The statement 'The earth is flat' was, very likely, rationally acceptable 3000 years ago; but it is not rationally acceptable today. Yet it would be wrong to say that 'the earth is flat' was true 3,000 years ago; for that would mean that the earth has changed its shape. (Putnam 1981, p. 55)

Rorty has also weighed in against James and Schiller:

Truth is, to be sure, an absolute notion, in the following sense: "true for me but not for you" and "true in my culture but not in yours" are weird, pointless locutions. So is "true then, but not now." [...] James would, indeed, have done better to say that phrases like "the good in the way of belief" and "what it is better for us to believe" are interchangeable with "justified" rather than with "true." (Rorty 1998, p. 2)

(2) Conceptual Relativity

Part of what James and Schiller mean by the phrase 'making truth' is their idea that we make things true by verifying them. This sense of 'making truth' has not been adopted by many other pragmatists. However, there is another sense to this phrase that nearly all pragmatists do adopt. It is the idea that there can be no truths without a conceptual scheme to express those truths. That is,

Unless we decide upon how we are going to use concepts like 'object', 'existence' etc., the question 'how many objects exist' does not really make any sense. But once we decide the use of these concepts, the answer to the above-mentioned question within that use or 'version', to put in Nelson Goodman's phrase, is no more a matter of 'convention'. (Maitra 2003 p. 40)

F.C.S. Schiller used the analogy of a chair to make clear what he meant by the phrase that truth is made: just as a carpenter makes a chair out of existing materials and doesn't create it out of nothing, truth is a transformation of our experience but that doesn't imply reality is something we're free to construct or imagine as we please.

Central pragmatist tenets

The primacy of practice

The pragmatist proceeds from the basic premise that the human capability of theorizing is integral to intelligent practice. Theory and practice are not separate spheres; rather, theories and distinctions are tools or maps for finding our way in the world. As John Dewey put it, there is no question of theory versus practice but rather of intelligent practice versus uninformed, stupid practice; he noted in a conversation with William Pepperell Montague that "[h]is effort had not been to practicalize intelligence but to intellectualize practice". (Quoted in Eldridge 1998, p. 5) Theory is an abstraction from direct experience and ultimately must return to inform experience in turn. Thus an organism navigating its environment is the ground for pragmatist inquiry.

Anti-reification of concepts and theories

Dewey, in The Quest For Certainty, criticized what he called "the philosophical fallacy": philosophers often take categories (such as the mental and the physical) for granted because they don't realize that these are merely nominal concepts that were invented to help solve specific problems. This causes metaphysical and conceptual confusion. Various examples are the "ultimate Being" of Hegelian philosophers, the belief in a "realm of value", the idea that logic, because it is an abstraction from concrete thought, has nothing to do with the act of concrete thinking, and so on. David L. Hildebrand sums up the problem: "Perceptual inattention to the specific functions comprising inquiry led realists and idealists alike to formulate accounts of knowledge that project the products of extensive abstraction back onto experience." (Hildebrand 2003)

Naturalism and anti-Cartesianism

From the outset, pragmatists wanted to reform philosophy and bring it more in line with the scientific method as they understood it. They argued that idealist and realist philosophy had a tendency to present human knowledge as something beyond what science could grasp. These philosophies then resorted either to a phenomenology inspired by Kant or to correspondence theories of knowledge and truth. Pragmatists criticized the former for its a priorism, and the latter because it takes correspondence as an unanalyzable fact. Pragmatism instead tries to explain, psychologically and biologically, how the relation between knower and known 'works' in the world.

In "The Fixation of Belief" (1877), C.S. Peirce denied that introspection and intuition (staple philosophical tools at least since Descartes) were valid methods for philosophical investigation. He argued that intuition could lead to faulty reasoning, e.g. when we reason intuitively about infinity. Furthermore, introspection does not give privileged access to knowledge about the mind - the self is a concept that is derived from our interaction with the external world and not the other way around. (De Waal 2005, pp. 7-10) By the time of his Harvard Lectures in 1903, however, he had become convinced that pragmatism and epistemology in general could not be derived from principles of psychology: what we do think is too different from what we should think. This is an important point of disagreement with most other pragmatists, who advocate a more thorough naturalism and psychologism.

Richard Rorty expanded on these and other arguments in Philosophy and the Mirror of Nature in which he criticized attempts by many philosophers of science to carve out a space for epistemology that is entirely unrelated to - and sometimes thought of as superior to - the empirical sciences. W.V. Quine, instrumental in bringing naturalized epistemology back into favor with his essay Epistemology Naturalized (Quine 1969), also criticized 'traditional' epistemology and its "Cartesian dream" of absolute certainty. The dream, he argued, was impossible in practice as well as misguided in theory because it separates epistemology from scientific inquiry.

Hilary Putnam asserts that the combination of antiskepticism and fallibilism is a central feature of pragmatism.

The reconciliation of anti-skepticism and fallibilism

Hilary Putnam suggests that the reconciliation of antiskepticism and fallibilism is the central goal of American pragmatism. Although all human knowledge is partial, with no ability to take a 'God's-eye-view,' this does not necessitate a globalized skeptical attitude. Peirce insisted that contrary to Descartes' famous and influential methodology in the Meditations on First Philosophy, doubt cannot be feigned or created for the purpose of conducting philosophical inquiry. Doubt, like belief, requires justification. It arises from confrontation with some specific recalcitrant matter of fact (which Dewey called a 'situation'), which unsettles our belief in some specific proposition. Inquiry is then the rationally self-controlled process of attempting to return to a settled state of belief about the matter. Note that anti-skepticism is a reaction to modern academic skepticism in the wake of Descartes. The pragmatist insistence that all knowledge is tentative is actually quite congenial to the older skeptical tradition.

Pragmatism in other fields of philosophy

While pragmatism started out simply as a criterion of meaning, it quickly expanded to become a full-fledged epistemology with wide-ranging implications for the entire philosophical field. Pragmatists who work in these fields share a common inspiration, but their work is diverse and there are no received views.

Philosophy of science

In the philosophy of science, instrumentalism is the view that concepts and theories are merely useful instruments whose worth is measured not by whether the concepts and theories somehow mirror reality, but by how effective they are in explaining and predicting phenomena. Instrumentalism does not state that truth doesn't matter, but rather provides a specific answer to the question of what truth and falsity mean and how they function in science.

One of C.I. Lewis' main arguments in Mind and the World Order: Outline of a Theory of Knowledge was that science does not merely provide a copy of reality but must work with conceptual systems and that those are chosen for pragmatic reasons, that is, because they aid inquiry. Lewis' own development of multiple modal logics is a case in point. Lewis is sometimes called a 'conceptual pragmatist' because of this. (Lewis 1929)

Another development is the cooperation of logical positivism and pragmatism in the works of Charles W. Morris and Rudolf Carnap. The influence of pragmatism on these writers is mostly limited to the incorporation of the pragmatic maxim into their epistemology. Pragmatists with a broader conception of the movement don't often refer to them.

W. V. Quine's paper "Two Dogmas of Empiricism," published in 1951, is one of the most celebrated papers of twentieth-century philosophy in the analytic tradition. The paper is an attack on two central tenets of the logical positivists' philosophy. One is the distinction between analytic truths, statements which are true simply in virtue of the meanings of their words ('all bachelors are unmarried'), and synthetic truths, which are grounded in empirical fact. The other is reductionism, the theory that each meaningful statement gets its meaning from some logical construction of terms which refers exclusively to immediate experience. Quine's argument brings to mind Peirce's insistence that axioms aren't a priori truths but synthetic statements.

Logic

Later in his life Schiller became famous for his attacks on logic in his textbook "Formal Logic." By then, Schiller's pragmatism had become the nearest of any of the classical pragmatists to an ordinary language philosophy. Schiller sought to undermine the very possibility of formal logic, by showing that words only had meaning when used in an actual context. The least famous of Schiller's main works was the constructive sequel to his destructive book "Formal Logic." In this sequel, "Logic for Use," Schiller attempted to construct a new logic to replace the formal logic he had just decimated in "Formal Logic." What he offers is something philosophers would recognize today as a logic covering the context of discovery and the hypothetico-deductive method.

Whereas F.C.S. Schiller actually dismissed the possibility of formal logic, most pragmatists are critical rather of its pretension to ultimate validity and see logic as one logical tool among others - or perhaps, considering the multitude of formal logics, one set of tools among others. This is the view of C.I. Lewis. C.S. Peirce developed multiple methods for doing formal logic.

Stephen Toulmin's The Uses of Argument inspired scholars in informal logic and rhetoric studies (although it is actually an epistemological work).

Metaphysics

James and Dewey were empirical thinkers in the most straightforward fashion: experience is the ultimate test and experience is what needs to be explained. They were dissatisfied with ordinary empiricism because in the tradition dating from Hume, empiricists had a tendency to think of experience as nothing more than individual sensations. To the pragmatists, this went against the spirit of empiricism: we should try to explain all that is given in experience including connections and meaning, instead of explaining them away and positing sense data as the ultimate reality. Radical empiricism, or Immediate Empiricism in Dewey's words, wants to give a place to meaning and value instead of explaining them away as subjective additions to a world of whizzing atoms.

The "Chicago Club" including Whitehead, Mead and Dewey. Pragmatism is sometimes called American Pragmatism because so many of its proponents were and are Americans - a fact some think is significant.
The "Chicago Club" including Whitehead, Mead and Dewey. Pragmatism is sometimes called American Pragmatism because so many of its proponents were and are Americans - a fact some think is significant.

William James gives an interesting example of this philosophical shortcoming:

[A young graduate] began by saying that he had always taken for granted that when you entered a philosophic classroom you had to open relations with a universe entirely distinct from the one you left behind you in the street. The two were supposed, he said, to have so little to do with each other, that you could not possibly occupy your mind with them at the same time. The world of concrete personal experiences to which the street belongs is multitudinous beyond imagination, tangled, muddy, painful and perplexed. The world to which your philosophy-professor introduces you is simple, clean and noble. The contradictions of real life are absent from it. [...] In point of fact it is far less an account of this actual world than a clear addition built upon it [...] It is no explanation of our concrete universe (James 1907, pp. 8-9)

F.C.S. Schiller's first book, "Riddles of the Sphinx", was published before he became aware of the growing pragmatist movement taking place in America. In it, Schiller argues for a middle ground between materialism and absolute metaphysics. These two explanatory schemes are comparable to what William James called tough-minded empiricism and tender-minded rationalism. The result of the split between them, Schiller contends, is that mechanistic naturalism cannot make sense of the "higher" aspects of our world (free will, consciousness, purpose, universals and, some would add, God), while abstract metaphysics cannot make sense of the "lower" aspects of our world (the imperfect, change, physicality). While Schiller is vague about the exact sort of middle ground he is trying to establish, he suggests that metaphysics is a tool that can aid inquiry and is valuable only insofar as it actually helps in explanation.

In the second half of the twentieth century, Stephen Toulmin argued that the need to distinguish between reality and appearance only arises within an explanatory scheme and therefore that there is no point in asking what 'ultimate reality' consists of. More recently, a similar idea has been suggested by the postanalytical philosopher Daniel Dennett, who argues that anyone who wants to understand the world has to adopt the intentional stance and acknowledge both the 'syntactical' aspects of reality (i.e. whizzing atoms) and its emergent or 'semantic' properties (i.e. meaning and value).

Radical Empiricism gives interesting answers to questions about the limits of science if there are any, the nature of meaning and value and the workability of reductionism. These questions feature prominently in current debates about the relationship between religion and science, where it is often assumed - most pragmatists would disagree - that science degrades everything that is meaningful into 'merely' physical phenomena.

Philosophy of mind

Both John Dewey in Experience and Nature (1929) and, half a century later, Richard Rorty in his monumental Philosophy and the Mirror of Nature (1979) argued that much of the debate about the relation of the mind to the body results from conceptual confusions. They argue instead that there is no need to posit the mind or mindstuff as an ontological category.

Pragmatists disagree over whether philosophers ought to adopt a quietist or a naturalist stance toward the mind-body problem. The former (Rorty among them) want to do away with the problem because they believe it's a pseudo-problem, whereas the latter believe that it is a meaningful empirical question.

Ethics

Pragmatism sees no fundamental difference between practical and theoretical reason, nor any ontological difference between facts and values. Both facts and values have cognitive content: knowledge is what we should believe; values are hypotheses about what is good in action. Pragmatist ethics is broadly humanist because it sees no ultimate test of morality beyond what matters for us as humans. Good values are those for which we have good reasons, viz. the Good Reasons approach. The pragmatist formulation pre-dates those of other philosophers who have stressed important similarities between values and facts such as Jerome Schneewind and John Searle.

William James tried to show the meaningfulness of (some kinds of) spirituality but, like other pragmatists, refused to see religion as the basis of meaning or morality.

William James' contribution to ethics, as laid out in his essay The Will to Believe, has often been misunderstood as a plea for relativism or irrationality. On its own terms it argues that ethics always involves a certain degree of trust or faith and that we cannot always wait for adequate proof when making moral decisions.

Moral questions immediately present themselves as questions whose solution cannot wait for sensible proof. A moral question is a question not of what sensibly exists, but of what is good, or would be good if it did exist. [...] A social organism of any sort whatever, large or small, is what it is because each member proceeds to his own duty with a trust that the other members will simultaneously do theirs. Wherever a desired result is achieved by the co-operation of many independent persons, its existence as a fact is a pure consequence of the precursive faith in one another of those immediately concerned. A government, an army, a commercial system, a ship, a college, an athletic team, all exist on this condition, without which not only is nothing achieved, but nothing is even attempted. (James 1896)

Of the classical pragmatists, John Dewey wrote most extensively about morality and democracy. (Edel 1993) In his classic article Three Independent Factors in Morals (Dewey 1930), he tried to integrate three basic philosophical perspectives on morality: the right, the virtuous and the good. He held that while all three provide meaningful ways to think about moral questions, conflicts among the three elements cannot always be easily resolved. (Anderson, SEP)

Dewey also criticized the dichotomy between means and ends, which he saw as responsible for the degradation of our everyday working lives and of education, both conceived as merely a means to an end. He stressed the need for meaningful labor and a conception of education that viewed it not as a preparation for life but as life itself. (Dewey 2004 [1910] ch. 7; Dewey 1997 [1938], p. 47)

Dewey was opposed to other ethical philosophies of his time, notably the emotivism of A. J. Ayer. Dewey envisioned the possibility of ethics as an experimental discipline, and thought values could best be characterized not as feelings or imperatives, but as hypotheses about what actions will lead to satisfactory results, or what he termed consummatory experience. A further implication of this view is that ethics is a fallible undertaking, since human beings are frequently unable to know what would satisfy them.

A recent pragmatist contribution to meta-ethics is Todd Lekan's "Making Morality" (Lekan 2003). Lekan argues that morality is a fallible but rational practice and that it has traditionally been misconceived as based on theory or principles. Instead, he argues, theory and rules arise as tools to make practice more intelligent.

Aesthetics

John Dewey's Art as Experience, based on the William James lectures he delivered at Harvard, was an attempt to show the integrity of art, culture and everyday experience. (Field, IEP) Art, for Dewey, is or should be a part of everyone's creative life, not just the privilege of a select group of artists. He also emphasizes that the audience is more than a passive recipient. Dewey's treatment of art was a move away from the transcendental approach to aesthetics in the wake of Immanuel Kant, who had emphasized the unique character of art and the disinterested nature of aesthetic appreciation.

A notable contemporary pragmatist aesthetician is Joseph Margolis. He defines a work of art as "a physically embodied, culturally emergent entity", a human "utterance" that is not an ontological quirk but is in line with other human activity and culture in general. He emphasizes that works of art are complex and difficult to fathom, and that no determinate interpretation can be given.

Philosophy of religion

Both Dewey and James have investigated the role that religion can still play in contemporary society, the former in A Common Faith and the latter in The Varieties of Religious Experience.

For William James, something is true only insofar as it works. Thus, the statement that prayer is heard, for example, may work on a psychological level, but (a) it will not actually help to bring about the things you pray for, and (b) it may be better explained by its soothing effect than by the claim that prayers are actually heard. As such, pragmatism is not antithetical to religion, but neither is it an apologetic for faith.

Joseph Margolis, in Historied Thought, Constructed World (California, 1995), makes a distinction between "existence" and "reality". He suggests using the term "exists" only for those things which adequately exhibit Peirce's Secondness: things which offer brute physical resistance to our movements. In this sense, things which affect us, such as numbers, may be said to be "real", though they do not "exist". Margolis suggests that God, in such a linguistic usage, might very well be "real", causing believers to act in such and such a way, but might not "exist".

Analytical, neoclassical and neopragmatism

Neopragmatism is a broad contemporary category used for various thinkers, some of them radically opposed to one another. The name neopragmatist signifies that the thinkers in question incorporate important insights of, and yet significantly diverge from, the classical pragmatists. This divergence may occur either in their philosophical methodology (many of them are loyal to the analytic tradition) or in actual conceptual formation (C.I. Lewis was very critical of Dewey; Richard Rorty dislikes Peirce). Important analytical neopragmatists include the aforementioned Lewis, W.V.O. Quine, Donald Davidson, Hilary Putnam and the early Richard Rorty. Stanley Fish, the later Rorty and Jürgen Habermas are closer to continental thought.

Neoclassical pragmatism denotes those thinkers who consider themselves inheritors of the project of the classical pragmatists. Sidney Hook and Susan Haack (known for the theory of foundherentism) are well-known examples.

Not all pragmatists are easily characterized. It is probable, considering the advent of postanalytic philosophy and the diversification of Anglo-American philosophy, that more philosophers will be influenced by pragmatist thought without necessarily publicly committing themselves to that philosophical school. Daniel Dennett, a student of Quine's, falls into this category, as does Stephen Toulmin, who arrived at his philosophical position via Wittgenstein, whom he calls "a pragmatist of a sophisticated kind" (foreword for Dewey 1929 in the 1988 edition, p. xiii). Another example is Mark Johnson, whose embodied philosophy (Lakoff and Johnson 1999) shares its psychologism, direct realism and anti-Cartesianism with pragmatism. Conceptual pragmatism is a theory of knowledge originating with the work of the philosopher and logician Clarence Irving Lewis; its epistemology was first formulated in the 1929 book Mind and the World Order: Outline of a Theory of Knowledge.

'French Pragmatism' is associated with theorists such as Bruno Latour, Michel Crozier, Luc Boltanski and Laurent Thévenot. It is often seen as opposed to the structural emphasis of the French Critical Theory associated with Pierre Bourdieu.

Contemporary echoes and ties

In the twentieth century, the movements of logical positivism and ordinary language philosophy had similarities with pragmatism. Like pragmatism, logical positivism provides a verification criterion of meaning that is supposed to rid us of nonsensical metaphysics. However, logical positivism does not stress action as pragmatism does. Furthermore, the pragmatists rarely used their maxim of meaning to rule out all metaphysics as nonsense. Usually, pragmatism was put forth to correct metaphysical doctrines or to construct empirically verifiable ones rather than to reject metaphysics wholesale.

Ordinary language philosophy is closer to pragmatism than most other philosophy of language because of its nominalist character and because it takes the broader functioning of language in an environment as its focus, instead of investigating abstract relations between language and world.

Pragmatism also has ties to process philosophy. Much of the classical pragmatists' work developed in dialogue with process philosophers such as Henri Bergson and Alfred North Whitehead, who are not usually considered pragmatists because they differ so much on other points. (Douglas Browning et al. 1998; Rescher, SEP)

Behaviorism and functionalism in psychology and sociology also have ties to pragmatism, which is not surprising considering that James and Dewey were both scholars of psychology and that Mead became a sociologist.

Utilitarianism has some significant parallels to pragmatism, and John Stuart Mill espoused similar values.

Criticism

Although many later pragmatists, such as W.V.O. Quine, were themselves analytic philosophers, the most vehement criticisms of classical pragmatism came from within the analytic school. Bertrand Russell was especially known for his vituperative attacks on what he considered little more than epistemological relativism and short-sighted practicalism. Realists in general often could not fathom how pragmatists could seriously call themselves empirical or realist thinkers, and they regarded pragmatist epistemology as only a disguised manifestation of idealism. (Hildebrand 2003)

Louis Menand argues[2] that during the Cold War, the intellectual life of the United States became dominated by ideologies. Since pragmatism seeks "to avoid the violence inherent in abstraction," it was not very popular at the time.

Neopragmatism as represented by Richard Rorty has been criticized as relativistic, both by neoclassical pragmatists such as Susan Haack (Haack 1997) and by many analytic philosophers (Dennett 1998). Rorty's early analytical work, however, differs notably from his later work, which some, including Rorty himself, consider to be closer to literary criticism than to philosophy; most criticism is aimed at this later phase of Rorty's thought.

A list of pragmatists

Classical pragmatists (1850-1950)

Important protopragmatists or related thinkers

Fringe figures

  • Giovanni Papini (1881-1956): Italian essayist, mostly known because James occasionally mentioned him.
  • Giovanni Vailati (1863-1909): Italian analytic and pragmatist philosopher.

Neoclassical pragmatists (1950-)

Neoclassical pragmatists stay closer to the project of the classical pragmatists than neopragmatists do.

  • Sidney Hook (1902-1989): a prominent New York intellectual and philosopher, a student of Dewey at Columbia.
  • Isaac Levi (born 1930): seeks to apply pragmatist thinking in a decision-theoretic perspective.
  • Susan Haack (born 1945): teaches at the University of Miami, sometimes called the intellectual granddaughter of C.S. Peirce, known chiefly for foundherentism.
  • Larry Hickman: philosopher of technology and important Dewey scholar as head of the Center for Dewey Studies.
  • David Hildebrand: like other scholars of the classical pragmatists, Hildebrand is dissatisfied with neopragmatism and argues for the continued importance of the writings of John Dewey.
  • Nicholas Rescher

Analytical, neo- and other pragmatists (1950-)

(Often labelled neopragmatism as well.)

Other pragmatists

Legal pragmatists

Pragmatists in the extended sense

Bibliography

IEP: Internet Encyclopedia of Philosophy
SEP: Stanford Encyclopedia of Philosophy


  • Elizabeth Anderson. Dewey's Moral Philosophy. Stanford Encyclopedia of Philosophy.
  • Douglas Browning, William T. Myers (Eds.) Philosophers of Process. 1998.
  • Robert Burch. Charles Sanders Peirce. Stanford Encyclopedia of Philosophy.
  • John Dewey. Donald F. Koch (ed.) Lectures on Ethics 1900–1901. 1991.
  • Daniel Dennett. Postmodernism and Truth. 1998.
  • John Dewey. The Quest for Certainty: A Study of the Relation of Knowledge and Action. 1929.
  • John Dewey. Three Independent Factors in Morals. 1930.
  • John Dewey. The Influence of Darwin on Philosophy and Other Essays. 1910.
  • John Dewey. Experience & Education. 1938.
  • Cornelis De Waal. On Pragmatism. 2005.
  • Abraham Edel. Pragmatic Tests and Ethical Insights. In: Ethics at the Crossroads: Normative Ethics and Objective Reason. George F. McLean, Richard Wollak (eds.) 1993.
  • Michael Eldridge. Transforming Experience: John Dewey's Cultural Instrumentalism. 1998.
  • Richard Field. John Dewey (1859-1952). Internet Encyclopedia of Philosophy.
  • David L. Hildebrand. Beyond Realism & Anti-Realism. 2003.
  • David L. Hildebrand. The Neopragmatist Turn. Southwest Philosophy Review Vol. 19, no. 1. January, 2003.
  • William James. Pragmatism, A New Name for Some Old Ways of Thinking, Popular Lectures on Philosophy. 1907.
  • William James. The Will to Believe. 1896.
  • George Lakoff and Mark Johnson. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. 1999.
  • Todd Lekan. Making Morality: Pragmatist Reconstruction in Ethical Theory. 2003.
  • C.I. Lewis. Mind and the World Order: Outline of a Theory of Knowledge. 1929.
  • Keya Maitra. On Putnam. 2003.
  • Joseph Margolis. Historied Thought, Constructed World. 1995.
  • Louis Menand. The Metaphysical Club. 2001.
  • Hilary Putnam. Reason, Truth and History. 1981.
  • W.V.O. Quine. Two Dogmas of Empiricism. Philosophical Review. January 1951.
  • W.V.O. Quine. Ontological Relativity and Other Essays. 1969.
  • N. Rescher. Process Philosophy. The Stanford Encyclopedia of Philosophy.
  • Richard Rorty. Truth and Progress: Philosophical Papers, Volume 3. 1998.
  • Stephen Toulmin. The Uses of Argument. 1958.

Notes and other sources

Papers and online encyclopedias are part of the bibliography. Other sources may include interviews, reviews and websites.

  • Gary A. Olson and Stephen Toulmin. Literary Theory, Philosophy of Science, and Persuasive Discourse: Thoughts from a Neo-premodernist. Interview in JAC 13.2. 1993.
  • Susan Haack. Vulgar Rortyism. Review in The New Criterion. November 1997.
  • Pietarinen, A.V. "Interdisciplinarity and Peirce's Classification of the Sciences: A Centennial Reassessment," Perspectives on Science, 14(2), 127-152 (2006).
  1. ^ See Peirce's 1908 "A Neglected Argument for the Reality of God", final paragraph.
  2. ^ Harvard Gazette, Feb 26, 2004.

Resources

Important introductory primary texts
Note that this is an introductory list: some important works are left out and some less monumental works that are excellent introductions are included.

  • C.S. Peirce, How to Make Our Ideas Clear (paper)
  • C.S. Peirce, A Definition of Pragmatism (paper)
  • William James, Pragmatism (especially lectures I, II and VI)
  • John Dewey, Reconstruction in Philosophy
  • John Dewey, Three Independent Factors in Morals (paper)
  • John Dewey, A short catechism concerning truth (chapter)
  • W.V.O. Quine, Two Dogmas of Empiricism (paper)

Secondary texts

  • Cornelis De Waal, On Pragmatism
  • Louis Menand, The Metaphysical Club
  • Hilary Putnam, Pragmatism: An Open Question
  • Abraham Edel, Pragmatic Tests and Ethical Insights
  • D. S. Clarke, Rational Acceptance and Purpose
  • Haack, Susan & Lane, Robert, Eds. (2006). Pragmatism Old and New: Selected Writings. New York: Prometheus Books.
  • Louis Menand, ed., Pragmatism: A Reader (includes essays by Peirce, James, Dewey, Rorty, others)

Journals
There are several peer-reviewed journals dedicated to pragmatism.

Online resources






