
It was only later, with the blossoming of interest in this scientific controversy and the appearance of a younger generation of physicists interested in the subject, that Popper could fulfill his early desire to take part in this controversy. Most of his ideas on the subject are gathered in the volume Quantum Theory and the Schism in Physics.

Less known is that Popper went further in his engagement with the debates over the meaning of the quanta. He was able to do this through collaboration with physicists such as Jean-Pierre Vigier and Franco Selleri, who were harsh critics of the standard interpretation of quantum physics. From this collaboration emerged a proposal for an experiment to test the validity of some presumptions of quantum theory. Initially conceived as an idealized experiment but eventually brought to the lab benches by Yanhua Shih, it spurred a debate which survived Popper himself.

Moreover, I will focus on the early resonance that such a Gedankenexperiment had in the revived debate on quantum foundations. In fact, when he came back to problems of quantum mechanics, Popper strengthened his acquaintances with some illustrious physicists with philosophical interests (the likes of D. Bohm, H. Bondi, and W. Yourgrau), but was not engaged in the quantum debate within the community of physicists: he did not publish in physics journals or participate in specialised physics conferences. At that time, Popper systematised his critique of the Copenhagen interpretation of quantum mechanics, proposing an alternative interpretation based on the concept of ontologically real probabilities (propensities) that met the favour of several distinguished physicists (among them, D. Bohm).

This endeavour led Popper to enter a long-lasting debate within the community of physicists. Popper's influence ranges from direct contributions, such as suggestions of experiments in quantum mechanics, to a more general methodological impact. Especially his criticism of instrumentalism and his advocacy of realism have been an eye-opener for many. As an illustration, a case from the field of neuroscience is discussed in the paper.

It relates to the development of theories about mechanisms underlying the nerve impulse. The central question after the pioneering studies by Hodgkin and Huxley was how the critical ions permeate the nerve cell membrane (Hille, 2001). Some experimentalists adopted a realistic view and tried to understand the process by constructing mechanistic models, almost in a Cartesian fashion. Others adopted an instrumentalistic, positivistic and allegedly more scientific view and settled for a mathematical black-box description.

When it finally became possible to experimentally determine the molecular details, they were found to fit the realistic, mechanistic attempts well, while the instrumentalistic attempts had not led far, thus supporting the Popperian view. The present paper discusses two aspects of his philosophy of mind. One aspect relates to the ontology of mind and the theory of biological evolution. In the theory of evolution Popper found support for his interactionistic view on the mind-brain problem. This, as will be discussed, is a view that many philosophers find difficult to accept.

His view has renewed the interest in force fields as a model for consciousness, and the present paper discusses some recent hypotheses that claim to solve problems attaching to the dominant present-day mind-brain theories. Several authors even identify consciousness with an electromagnetic field (Jones, 2013). In contrast, Popper proposes that consciousness works via electromagnetic forces. This has been criticized as violating thermodynamic conservation laws. The paper also discusses a related hypothesis that consciousness acts on fields of probability amplitudes rather than on electromagnetic fields.

The present paper argues that such models, based on quantum mechanical ideas, often are in conflict with Popper's propensity interpretation of quantum mechanics (Popper, 1982).
Hille, B. (2001). Ion channels of excitable membranes (3rd ed.). Sunderland, MA: Sinauer Associates.
Jones, M. (2013). Electromagnetic-field theories of mind. Journal of Consciousness Studies, 20.
Lindahl, B. Mind as a force field: Comments on a new interactionistic hypothesis. Journal of Theoretical Biology.
Popper, K. (1982). Quantum Theory and the Schism in Physics. London: Hutchinson (later published by Routledge).
A discussion of the mind-brain problem. Theoretical Medicine, 14.
Beck, F. Journal of Theoretical Biology.

Our ordinary perceptions become automatized, due to familiarity and saturated contact. For Shklovsky, however, literature and art in general are able to disturb our common world views.

According to the examined authors, defamiliarization and aesthetic experience are responsible for bringing to consciousness things that were automatized, putting a new light on them. That appears also to be the case in science fiction, in which the break of expectations may have consequences not only for the paradigms of the sciences, but also for reflection on the role of science in ordinary life.

In many cases, scientific notions are already unconsciously accepted in quotidian life, just like everyday assumptions. Besides, science fiction, by exaggerating or pushing scientific theories as far as can be imagined, brings about important and profound considerations regarding philosophical questions as well. For instance, in H. G. Wells's fiction it is possible for matter and light to behave in ways the established paradigm in physics at the end of the 19th century did not permit.

It is also interesting to notice that the book was published in a time of crisis in physics, and it seems that Wells absorbed ideas that would change the paradigm a few years later. To claim that science fiction influenced some scientists to initiate scientific revolutions is perhaps too large a step to take in this paper.

Nevertheless, it is possible to say that the process of defamiliarization in the reading of science fiction can lead to a new understanding of scientific concepts, inducing reflections that would not be made if the regular context and laws of science were maintained. Students in professional programs or graduate research programs tend to use an excess of vague pronouns in their writing. Reducing vagueness in writing could improve written communication skills, which is a goal of many professional programs. Bertrand Russell argued that all knowledge is vague. This research provides evidence that vagueness in writing is mitigated with instructor feedback.

All participants wrote at a proficient level, as determined by passing scores on a Professional Readiness Examination Test in writing.
Faust, D.
Russell, B. (1923). Vagueness. Australasian Journal of Psychology and Philosophy, 1(2).


Science-related museums are special kinds of museums that are concerned with science, technology, the natural world, and other related issues. Today, there are many science-related museums worldwide operating in different styles and playing different social roles, such as collecting, conserving and exhibiting objects, researching relevant issues, and educating the public. Through the different development processes of science-related museums in the Western world and in China, we can say that science-related museums are outcomes of the influence of social and cultural conditions such as economy, local culture, policy, humans' views on science, and so on.

The Western world is considered to be the birthplace of science-related museums, where museums experienced different lines of development, including the natural history museum, the traditional science and technology museum, and the science centre. For China, however, museums are imported goods: foreigners and Western culture affected the emergence of museums in China, and they are developing rapidly today.

We describe a notion of robustness for configurational causal models (CCMs). Where regression analytic methods (RAMs) relate variables to each other and quantify net effects across varying background conditions, CCMs search for dependencies between values of variables, and return models that satisfy the conditions of an INUS-theory of causation.

As such, CCMs are tools for case-study research: a unit of observation is a single case that exhibits some configuration of values of measured variables. CCMs automate the process of recovering causally interpretable dependencies from data via cross-case comparisons. The basic idea is that causes make a difference to their effects, and causal structures can be uncovered by comparing otherwise homogeneous cases where some putative cause- and effect-factors vary suitably.

CCMs impose strong demands on the analysed data, which are often not met in real-life data. The most important of these is causal homogeneity: unlike RAMs, CCMs require the causal background of the observed cases to be homogeneous as a sufficient condition for the validity of the results. This assumption is often violated. In addition, data may include random noise, and lack sufficient variation in measured variables.

These deficiencies may prevent CCMs from finding any models at all. Thus, CCM methodologists have developed model-fit parameters that measure how well a model accounts for the observed data, and that can be adjusted to find models that explain the data less than perfectly. Lowering model-fit requirements increases the underdetermination of models by data, making model choice harder. We performed simulations to investigate the effects that lowering model-fit requirements has on the reliability of the results. These reveal that, given noisy data, the models with best fit frequently include irrelevant components — a type of overfitting.

In RAMs, overfitting is remedied by robustness testing: roughly, a robust model is insensitive to the influence of particular observations. This remedy does not transfer directly to CCMs, whose case-sensitivity is precisely what makes them sensitive to noise. However, a notion of robustness as the concordance of results derived from different models (e.g. Wimsatt 2007) can be implemented in CCMs. We implement the notion of a robust model as one which agrees with many other models of the same data, and does not disagree with many other models, in the causal ascriptions it makes. Simulation results demonstrate that this notion can be used as a reliable criterion of model choice given massive underdetermination of models by data.
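As a toy illustration of this criterion (my sketch, not the authors' implementation; the models, ascriptions and agreement-minus-disagreement scoring are hypothetical), robustness can be scored by comparing the causal ascriptions of candidate models recovered from the same data:

```python
# Toy sketch (not the authors' implementation): robustness of a CCM as
# concordance of its causal ascriptions with other candidate models of
# the same data. Models and ascriptions below are hypothetical.

# Each candidate model is reduced to a set of (cause, effect) ascriptions.
candidate_models = {
    "m1": {("A", "E"), ("B", "E")},
    "m2": {("A", "E"), ("B", "E"), ("C", "E")},  # extra, possibly irrelevant component
    "m3": {("A", "E"), ("B", "E")},
    "m4": {("D", "E")},
}

def robustness(name, models):
    """Agreements minus disagreements with all other candidate models."""
    target = models[name]
    score = 0
    for other, ascriptions in models.items():
        if other == name:
            continue
        score += len(target & ascriptions)  # shared ascriptions (agreement)
        score -= len(target ^ ascriptions)  # unshared ascriptions (disagreement)
    return score

for name in sorted(candidate_models, key=lambda m: robustness(m, candidate_models), reverse=True):
    print(name, robustness(name, candidate_models))
```

On this toy data, the model with the spurious extra component ranks below the concordant ones, mirroring the overfitting diagnosis above.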

Baumgartner, M. (2015). Identifying complex causal dependencies in configurational data with coincidence analysis. R Journal, 7(1).
Wimsatt, W. (2007). Re-engineering philosophy for limited beings. Cambridge, MA: Harvard University Press.

In the last decades, the interest in the notion of emergence has steadily grown in philosophy and science, but no uncontroversial definitions have yet been articulated.

Classical formulations generally focus on two features: irreducibility and novelty. In the first case, an entity is emergent from another if the properties of the former cannot be reduced to the properties of the latter. In the second case, a phenomenon is emergent if it exhibits novel properties not had by its component parts. Despite describing significant aspects of emergent processes, both definitions raise several problems. On the one hand, the widespread habit of identifying emergent entities with entities that resist reduction amounts to explaining an ambiguous concept through an equally puzzling notion.

Just like emergence, in fact, reduction is not at all a clear, uncontroversial technical term. On the other hand, a feature such as qualitative novelty can easily appear to be an observer-relative property, rather than an indicator of the ontological structure of the world. In view of the above, to provide a good model of emergence other features should be taken into consideration too, and the ones which I will focus on are discontinuity and robustness.

The declared incompatibility between emergence and reduction reflects the difference between the models of reality underlying them. While reductionism assigns to the structure of reality a mereological and nomological continuity, emergentism leaves room for discontinuity instead. The reductionist universe is composed of a small number of fundamental microphysical entities and of a huge quantity of combinations of them. In this universe, the nature of the macroscopic entities depends upon that of the microscopic ones, and no physically independent property is admitted.

Accepting the existence of genuine emergence, conversely, implies the claim that the structure of the world is discontinuous both metaphysically and nomologically. Matter is organized in different ways at different scales, and there are phenomena which are consequently scale-relative and have to be studied by different disciplines.

While the laws of physics are still true and valid across many scales, other laws and regularities emerge with the development of new organizational structures whose behavior is often insensitive to microscopic constraints. By robustness is meant the ability of a system to preserve its features despite fluctuations and perturbations in its microscopic components and environmental conditions.

Emergent phenomena, then, rather than merely novel, are robust in their insensitivity to the lower level from which they emerge. Emergence, therefore, does not describe atypical processes in nature, nor a mere failure of our explanations of reality. It suggests, by contrast, that the structure of the world is intrinsically differentiated, and that to each scale and organizational layer there correspond peculiar emergent and robust phenomena exhibiting features absent at lower or higher scales.

This paper articulates and defends an egalitarian ontology of levels of being that solves a number of philosophical puzzles and suits the needs of the philosophy of science.

I argue that neither wholes nor their parts are ontologically prior to one another. Neither higher-level properties nor lower-level properties are prior to one another. Neither is more fundamental; neither grounds the other. Instead, whole objects are portions of reality considered in one of two ways. If they are considered with all of their structure at a given time, they are identical to their parts, and their higher-level properties are identical to their lower-level properties. Alternatively, we can consider them in abstraction from much of that structure, as things that can gain and lose parts. When we do this, whole objects are subtractions of being from their parts—they are invariants under some part addition and subtraction.

The limits to what lower-level changes are acceptable are established by the preservation of the properties that individuate a given whole. When a change in parts preserves the properties that individuate a whole, the whole survives; when individuative properties are lost through a change in parts, the whole is destroyed. By the same token, higher-level properties are subtractions of being from lower-level properties—they are part of their realizers and are also invariant under some changes in their lower-level realizers.

This account solves the puzzle of causal exclusion without making any property redundant. Higher-level properties produce effects, though not as many as their realizers. Lower-level properties also produce effects, though more than the properties they realize. For higher-level properties are parts of their realizers; there is no conflict and no redundancy between them causing the same effect. As long as we focus on the right sorts of effects—effects for which higher-level properties are sufficient causes—explaining effects in terms of higher-level causes is more informative than explaining them in terms of lower-level ones.

For adding the lower-level details adds nonexplanatory information. In addition, tracking myriad lower-level parts and their properties is often practically unfeasible. In many cases, we may not even know what the relevant parts are. Given this egalitarian ontology, traditional reductionism fails because, for most scientific and everyday purposes, there is no identity between higher levels and lower levels. Traditional antireductionism also fails because higher levels are not wholly distinct from lower levels.

Ontological hierarchy is rejected wholesale. Yet each scientific discipline and subdiscipline has a job to do—finding the explanations of phenomena at any given level—and no explanatory job is more important than any other because they are all getting at some objective aspect of reality.

There has been a tremendous development of computerized systems for artificial intelligence in the last thirty years.


Now in some domains machines get better results than humans: playing chess or even Go, beating the best champions; medical diagnosis, for example in oncology; automatic translation; and vision, such as recognizing a face in one second among millions of photos. These successes rely on progress in hardware technology, in computational speed, and in the capacity to exploit Big Data. These developments have led the main actors to talk about a new science, or rather a new techno-science: machine learning, defined by the fact that it is able to improve its own capacities by itself (see [L]).

We will discuss these various topics carefully, in particular: Is it a new science or a new techno-science? And finally, what are the limits of this numerical invasion of the world?

In the recent literature, there has been much discussion about the explainability of ML algorithms. This property of explainability, or lack thereof, is critical not only in scientific contexts, but also for the potential use of those algorithms in public affairs. In this presentation, we focus on the explainability of bureaucratic procedures to the general public.

The use of unexplainable black boxes in administrative decisions would raise fundamental legal and political issues, as the public needs to understand bureaucratic decisions in order to adapt to them, and possibly to exercise its right to contest them. In order to better understand the impact of ML algorithms on this question, we need a finer diagnosis of the problem, and to understand what should make them particularly hard to explain. In order to tackle this issue, we turn the tables around and ask: what makes ordinary bureaucratic procedures explainable? A major part of such procedures are decision trees or scoring systems.

We make the conjecture, which we test on several case studies, that those procedures typically enjoy two remarkable properties. The first is compositionality: the decision is made of a composition of subdecisions. The second is elementarity: the analysis of the decision ends on easily understandable elementary decisions. This allows bureaucratic procedures to grow in size without compromising their explainability to the general public.
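As a minimal sketch of these two properties, consider a hypothetical benefit-eligibility procedure (the variables, thresholds and function names are invented for illustration, not taken from the case studies):

```python
# Hypothetical benefit-eligibility procedure; names and thresholds are
# invented for illustration.

def income_low(income: float) -> bool:
    # Elementarity: an elementary subdecision, understandable on its own.
    return income < 2000.0

def household_large(dependents: int) -> bool:
    # Another elementary subdecision.
    return dependents >= 3

def eligible(income: float, dependents: int) -> bool:
    # Compositionality: the decision is a composition of subdecisions,
    # each of which can be cited separately when explaining the outcome.
    return income_low(income) or household_large(dependents)

applicant = {"income": 2500.0, "dependents": 4}
print(eligible(**applicant))                                   # True
print("because household_large:", household_large(applicant["dependents"]))
```

Each subdecision is understandable in isolation (elementarity), and the overall decision can be explained by citing only the subdecisions that fired (compositionality), which is what allows explanation by extracts.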

In the case of ML procedures, we show that the properties of compositionality and elementarity correspond to properties of the segmentation of the data space by the execution of the algorithm. Compositionality corresponds to the existence of well-defined segmentations, and elementarity corresponds to the definition of those segmentations by explicit, simple variables. But ML algorithms can lose either of those properties. Such is the case of opaque ML, as illustrated by deep-learning neural networks, where both properties are actually lost. This entails an enhanced dependence of a given decision on the procedure as a whole, compromising explainability by extracts. If ML algorithms are to be used in bureaucratic decisions, it becomes necessary to find out whether the properties of compositionality and elementarity can be recovered, or whether the current opacity of some ML procedures is due to a fundamental scientific limitation.

This paper embeds the concern for algorithmic transparency in artificial intelligence within the history of technology and ethics. The value of transparency in AI, according to this history, is not unique to AI. Rather, black-box AI is just the latest development in the long history of industrial and post-industrial technology that narrows the scope of practical reason. Studying these historical precedents provides guidance as to the possible directions of AI technology, towards either the narrowing or the expansion of practical reason, and the social consequences to be expected from each.

The paper first establishes the connection between technology and practical reason, and the concern among philosophers of ethics and politics about the impact of technology in the ethical and political realms. The first generation of such philosophers, influenced by Weber and Heidegger, traced the connection between changes in means of production and the use of practical reason for ethical and political reasoning, and advocated in turn a protection of practical reasoning — of phronesis — from the instrumental and technical rationality valued most by modern production.

More recently, philosophers within the postphenomenological tradition have identified techne within phronesis as its initial step of formation, and thus call for a more empirical investigation of particular technologies and their enablement or hindering of phronetic reasoning. This sets the stage for a subsequent empirical look at the history of industrial technology from the perspective of technology as an enabler or hindrance to the use of practical reasoning and judgment. This critical approach to the history of technology reveals numerous precedents of significant relevance to AI that from a conventional approach to the history of technology focusing on technical description appear to be very different from AI — such as the division of labor, assembly lines, power machine tools and computer-aided machinery.

In particular, this section looks at the use of statistics in industrial production, as it is the site of a nearly century-long tension between approaches explicitly designed to narrow or expand the judgment of workers. Finally, the paper extends this history to contemporary AI — where statistics is the product, rather than a control on the means of production — and presents the debate on explainable AI as an extension of this history. This final section explores the arguments for and against transparency in AI. Equipped with the guidance of many years of precedents, the possible paths forward for AI are much clearer, as are the effects of each path on ethics and political reasoning more broadly.

Truthlikeness is a property of a theory or a proposition that represents its closeness to the truth of some matter. In the similarity approach, roughly, the truthlikeness of a theory or a proposition is defined according to its distance from the truth, measured by an appropriate similarity metric. We will expose a counterexample to this definition presented by Thom, Weston and Liu, and a modification of it that we think is much clearer and more intuitive.

The first parameter, accuracy, is correctly measured by the Minkowski metric; the second, nomicity, can be measured by the difference of the derivatives. Therefore, for some interval [n, m]:
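The formulas themselves are lost in this text; the following is one natural reconstruction (an assumption, not the authors' exact definitions), taking f as the true law, g as a candidate law, and p as the order of the Minkowski metric on the interval [n, m]:

```latex
% Hedged reconstruction of the two distance parameters (not the authors' exact formulas):
% accuracy = Minkowski distance between the law g and the truth f on [n, m];
% nomicity = the same distance applied to their first derivatives.
\[
d_{\mathrm{acc}}(g,f) = \left( \int_{n}^{m} \lvert g(x) - f(x) \rvert^{p} \, dx \right)^{1/p},
\qquad
d_{\mathrm{nom}}(g,f) = \left( \int_{n}^{m} \lvert g'(x) - f'(x) \rvert^{p} \, dx \right)^{1/p}.
\]
```

Each candidate law is then mapped to the point (d_acc, d_nom) in the two-dimensional space described next, with the truth at (0, 0).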

Once defined in this way, we can represent all possible laws regarding some phenomenon in a two-dimensional space and extract some interesting insights. The point (0, 0) will correspond to the truth in question, and each point will correspond to a possible law with a different degree of accuracy and nomicity. We can define level lines (sets of theories equally truthlike) and represent scientific progress as the move from a given level line to another closer to (0, 0), where scientific progress may be achieved by a gain of accuracy and nomicity in different degrees.

We can define some values a of accuracy and n of nomicity under which we can consider laws to be truthlike in an absolute sense. We will see how we can rationally estimate these values according to scientific practice. We will estimate the degrees of truthlikeness of four laws (the ideal gas model and the Van der Waals, Beattie–Bridgeman and Benedict–Webb–Rubin models) regarding nitrogen in its gaseous state.

We will argue that.
Truthlikeness for Multidimensional, Quantitative Cognitive Problems. Journal of Philosophical Logic, 25(2).
Kuipers, T.
Liu, C. Synthese.
Niiniluoto, I.
Thom, R. Reading, MA: Addison-Wesley.
Weston, T.

Some recent literature (Hicks and van Elswyk; Bhogal) has argued that non-Humean conceptions of laws of nature share a weakness with Humean conceptions of laws of nature. Precisely, both conceptions face a problem of explanatory circularity: Humean and non-Humean conceptions of laws of nature agree that law statements are universal generalisations; thus, both conceptions are vulnerable to an explanatory circularity problem between the laws of nature and their instances.

A first circularity is a full explanatory circularity, hereafter the problem of circularity C. In brief, a law of nature is inferred from an observed phenomenon and, thereafter, it is used to explain that same observed phenomenon. Thus, an observed phenomenon explains itself. The other circularity is a problem of self-explanation, hereafter the problem of circularity SE. The problem of circularity SE is a sub-problem of the problem of circularity C.

A law of nature explains an observed phenomenon, but the law includes that same phenomenon in its content. (P1) Natural laws are generalizations. (P3) Natural laws explain their instances. In this presentation, I will discuss the premises of the above arguments. At the end, I will analyse a semantic circularity condition for unsuccessful explanations, recently proposed by Shumener, in relation to this discussion.

Armstrong, David. What Is a Law of Nature? Cambridge: Cambridge University Press.
Bhogal, Harjit. Australasian Journal of Philosophy, 95(3).
Hicks, Michael Townsen, and Peter van Elswyk. Philosophical Studies.
Shumener, Erica. The British Journal for the Philosophy of Science.

In understanding the essentially rhetorical character of science, special attention should be paid to the place of figurativeness in research discourse. The central question of my presentation is whether it is possible, in the absence of figurativeness, to produce radically different meanings that will transform the conceptual space of science.

In most cases, the role of figurativeness is reduced to the optimisation of knowledge transmission. One of the rhetorical elements most widely used in research discourse, the metaphor often becomes an irreplaceable vehicle, for it makes it possible to create an idea of the object. The use of figurative elements in scientific language translates both into communicative optimisation and, owing to the uniqueness of the interpretation process, into the discovery of new ways to understand the object.

However, the role of figurativeness in research discourse is not limited to knowledge transmission. Despite the significance of communication, in considering the role of figurativeness in scientific discourse the focus should be shifted from the concept of communication to that of articulation, in other words, from another to the self. In this presentation, I will put forward arguments in support of this claim. To build and develop a theoretical model, the mere abstraction of the object is not sufficient. Always beyond the realm of convenience, figurativeness by default transcends the existing conceptual terrain.

Sometimes, it refutes any objective, rationalised convenience. It even seems to be aimed against conveniences. It means an upsurge in subjectivity, which, in the best case, destroys the common sense of things that is embedded in the forms of communicative rationality.

Under modern conditions, the influence of electronic media on the social construction of historical memory is huge. Historical information is transferred to a digital format: not only archives and libraries accumulate knowledge of the Past, but also electronic database storages. Written memory gives way to electronic memory, and the development of Internet technologies provides a massive number of users with access to it.

Today the idea of the Past is formed not only by the efforts of professional historians, but also by Internet users. The set of individual images of history creates collective memory. Modern society is going through a memory boom, which is connected with the ability of users to produce knowledge of the Past and to transmit it through new media. Thus, memory moves from personal and cultural space to the sphere of public media.

This process is about the emergence of media memory. The research on the influence of media on individual and collective memory is based on M. McLuhan's works. The study of social memory is carried out within M. Halbwachs's framework. The analysis of ideas of the Past is based on the methods of historical epistemology presented in H. White's and A. Megill's works. A small number of studies is devoted to the influence of media on social memory (Freeman, B. Nyenas and R. Daniel). The mediatization of society has produced a special mechanism of storage, conversion and transmission of information which changed the nature of the production of historical knowledge and the practice of oblivion.

Also, the periods of storage of social information have changed. In view of the above, the author defines media memory as the digital system of storage, transformation, production and dissemination of information about the Past. The historical memory of individuals and communities is formed on the basis of media memory.

Media memory can be considered as the virtual social mechanism of storage and oblivion; it can provide various forms of representation of history in everyday space, expand the practices of representation of the Past and of commemoration, and increase the number of those creating and consuming memorial content.

Standing on the position of historical epistemology, we can observe the emergence of new ways of cognizing the Past. Media memory selects historical knowledge, including relevant information about the Past in the agenda and subjecting to oblivion the Past for which there is no social need. Also, there is a segmentation of historical knowledge between various elements of the media sphere.

It is embodied in a variety of historical Internet resources available to users belonging to different target audiences. Media memory is democratic. It is created on the basis of free expression of thoughts and feelings by available language means. Photos and documentary evidence play equally important roles in the formation of ideas of the Past, alongside subjective perception of reality and evaluative statements. Attempts to hide any historical information or withdraw it from public access lead to its greater distribution. Media memory as a form of collective memory is set within the concept of post-truth, where personal history and personal experience of reality replace objective data for a particular person.

The knowledge of history gains new meanings, methods and forms; this in turn makes researchers look for new approaches within historical epistemology.

The communicative dimension of epistemological discourse is connected with research on how communication forms influence the production of knowledge. The modern communication revolution is determined by the new social role of Internet technologies, which mediate social communication at different levels and open mass access to all kinds of communication.

The development of the Internet services of social networks gives users ever more sophisticated instruments of communication management. These tools give individuals the possibility of developing their own networks of any configuration, despite minimal information about partners, and of distributing knowledge outside the traditional institutional schemes of modernity. The spread of social networks has a cognitive effect because it ensures the inclusion of mass users in the production of informal knowledge. The author believes that Internet content is a specific form of ordinary knowledge, including special discursive rules for the production of knowledge, as well as a system of its verification and legitimation.

The research on media influence on the cognitive structures of communication is based on M. McLuhan's ideas; the analysis of network modes of production of knowledge is based on M. Granovetter's and M. Castells's network approach; the cognitive status of Internet content is proved by means of the concept of ordinary knowledge of M. Bianca and P. Piccari. The author's arguments are based on the communication approach, which brings together the categories of social action, the communicative act and the act of cognition. Ordinary knowledge in epistemology is quite a marginal problem.

A rather small amount of research is devoted to its development. One of the key works in this sphere is the collective monograph "Epistemology of Ordinary Knowledge", edited by M. Bianca and P. Piccari. In this work M. Bianca defends the conception according to which ordinary knowledge is a form of knowledge which not only allows us to get epistemic access to the world, but also includes the development of models of the world which possess different degrees of reliability. The feature of this form is that ordinary knowledge can be reliable and relevant even though it lacks the reliability of scientific knowledge. The question of how the media sphere changes the formation of ordinary knowledge remains poorly studied.

In the beginning, the technical principles of operating content determine the epistemic processes connected with the complication of the structure of the message. The environment of ordinary knowledge formation is thinking and oral speech. Usage of text splits the initial syncretism of ordinary knowledge, increasing the degree of its reflexivity and its subordination to genre norms (literary, documentary, journalistic). Usage of the basic elements of a media text (graphic, audio and visual inserts) strengthens genre eclecticism and expands the possibilities of user self-expression, and the subject-centricity and subjectivity of the message.

The dominance of subjective elements in the advancement of media content is fixed by the neologism "post-truth". The author defines post-truth as an independent concept of media discourse possessing negative connotations and emphasizing the influence of interpretations as compared to factography. The communicative essence of post-truth comes down to the effect of belief as a personal and emotional relation to the subject of the message.

Post-truth combines the global with the private, personalizes macro-events and facilitates the formation of their assessment for the recipient. Post-truth as a transmission of subjectivity is based on the representation of personal subjective experience of world cognition. Post-truth does not mean the direct oblivion and depreciation of truth. The emotionally charged attitude acts as a filter for the streams of diverse content under conditions of information overload. Through post-truth people also cognize and, at the same time, express themselves, create identities and enter into collective actions.

Communicative epistemology as a methodological project offers new prospects for research on social networks' production of knowledge. According to the author, social networks as a special channel transform ordinary knowledge into informal knowledge.

Nevertheless, the clarification of peer disagreement under multiple guidelines may require further methodological development to improve cognitive grasp, given the great magnitude of data and information in them, as in the case of multi-expert decision-making (Garbayo; Garbayo et al.).

In order to fill this methodological gap, we propose an innovative computational epistemology of disagreement platform for the study of epistemic peer evaluations of medical guidelines. To that effect, we suggest measuring the conceptual distances between guideline terms in their scientific domains with natural language processing tools and topological analysis, to add modeling precision to the characterization of epistemic peer disagreement in its specificity, while contrasting multiple guidelines simultaneously.
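A minimal sketch of the kind of distance measurement proposed (the vectors below are toy placeholders; a real implementation would use embeddings trained on a domain corpus, which is my assumption rather than the authors' specified tooling):

```python
# Toy sketch: conceptual distance between guideline terms as cosine
# distance between term vectors. The 3-d vectors are placeholders; a
# real study would use embeddings trained on a medical corpus (this
# choice of tooling is an assumption, not the authors' specification).

import math

toy_vectors = {
    "mammography": [0.9, 0.1, 0.3],
    "screening":   [0.8, 0.2, 0.4],
    "biopsy":      [0.1, 0.9, 0.5],
}

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

for t1, t2 in [("mammography", "screening"), ("mammography", "biopsy")]:
    print(t1, t2, round(cosine_distance(toy_vectors[t1], toy_vectors[t2]), 3))
```

Terms that guidelines use in closely related senses come out with small distances, which is the raw material for comparing how different guidelines carve up the same domain.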

The main epistemic hypothesis in this study is that medical guidelines disagreement on breast cancer screening, when translated into conflicting epistemic peer positions, may represent a Galilean idealization type of model of disagreement that discounts relevant peer characterization aspects, which a semantic treatment of contradictions and disagreement may further help to clarify (Zadrozny, Hamalatian & Garbayo). A new near-peer epistemic agency classification, in reference to the medical sub-areas involved, may be required as a result, to better explain some disagreements across fields such as oncology, gynecology, mastology, and family medicine.

The Epistemology of Disagreement: New Essays.
Garbayo, L. Constraint Programming and Decision Making. Springer.
Henderson, J.
Hamalatian, H. In Proc. Florida Artificial Intelligence Research Society, Poster Abstracts.
Zadrozny, W., & Garbayo, L. Preliminary Report and Discussion.

The Infinite Gods paradox is introduced by Benardete in the context of his metaphysical problems of the infinite. Priest started the discussion with the publication of a logical analysis, and Yablo followed with an argument defending the claim that the paradox contains a logical impossibility.

Contextualised in this discussion, my communication is based on the introduction of a proposal for a representation of the Infinite Gods paradox in the strict context of Classical Mechanics. The objective of following such a methodology is to deepen the understanding of the paradox and to clarify the type of problem that underlies it, using the analytical power of Classical Mechanics.

Nevertheless, no strictly mechanical representation of the Infinite Gods paradox has been published yet. But in clear contrast to Benardete's contention, the outcome is not a big metaphysical surprise but a simple and direct consequence of causal postulates implicit in Classical Mechanics. Finally, it also leads to the conclusion that the problem that underlies the paradox is not logical but causal, and thus stands in clear opposition to the reasoning defended by Yablo. Consequently, the next objective consists in explaining my diagnosis of what I consider erroneous in this last argument.

In addition to the main objective of deepening the understanding of the paradox and clarifying the type of problem that underlies it, the analysis of the problem of evolution via my mechanical representation makes it possible to clarify the type of interaction in it. This is in itself a conceptually interesting result in the theoretical context of Classical Mechanics.

According to the Encyclopedia on the Rights and Welfare of Animals, anthropocentrism relates to any idea which suggests the central importance, superiority and supremacy of man in relation to the rest of the world. Anthropocentrism also denotes that the purpose of Nature is to serve human needs and desires, based on the idea that man has the highest value in the world (Fox).

Even if anthropocentrism can be seen as a concept fitting mainly in the field of Environmental Ethics, we could say that it can also be considered a concept connected with Science, as part of the scientific outlook on the world. Even if we claim that the scientific outlook is objective and not subjective, provided that this parameter is controllable, are we at the same time in a position to assert that our view of the world is free of anthropocentrism? The branches of science which are more vulnerable to such a viewpoint, as their name may indicate, are the Humanities, as they focus on man and the achievements of human culture.

Such an approach is not expected from the so-called positive sciences. Nevertheless, the anthropocentric outlook is not avoided entirely. An example of this in Cosmology is the noted Anthropic Principle. The main idea of the Anthropic Principle, as we know it, is that the Universe seems to be "fine-tuned" in such a way as to allow the existence of intelligent life that can observe it.

In my presentation, I will attempt to present briefly the Anthropic Principle and to answer the questions mentioned above. In addition, I will try to show how anthropocentrism conflicts with the human effort to discover the world. I will also refer to the consequences of anthropocentrism for Ethics and for Science itself.

New York: Routledge.
Philosophical Transactions of the Royal Society of London.
Fox, M. Santa Barbara, California: Greenwood Press.

There are several camps in the recent debates on the nature of scientific understanding. There are factivists and quasi-factivists, who argue that scientific representations provide understanding insofar as they capture some important aspects of the objects they represent.

The factivist position has been opposed by the non-factivists who insist that greatly inaccurate representations can provide understanding given that these representations are effective or exemplify the features of interest. Both camps face some serious challenges. The factivists need to say more about how exactly partially or approximately true representations, as well as nonpropositional representations, provide understanding. The non-factivists are expected to put more effort into the demonstration of the alleged independence of effectiveness and exemplification from the factivity condition.

The aim of the proposed symposium is to discuss in detail some of these challenges and ultimately to defend the factivist camp.

This paper argues against the opposition between effectiveness and veridicality. Building on some cases of non-explanatory understanding, the author shows that effectiveness and veridicality are compatible and that we need both. The central claim of this paper is that such models bring understanding if they correctly capture the causal relationships between the entities which these models represent. The author argues that such explanations bring partial understanding insofar as they allow for an inferential transfer of information from the explanans to the explanandum.

What happens, however, when understanding is provided by explanations which do not refer to any causal facts? One of the characteristics of the debate around the factivity of understanding is its focus on the explanatory sort of understanding; the non-explanatory kind has barely been considered. The proposed contribution tries to take some steps in this direction and, in this way, to suggest some possible points of investigation.

The inquiry will look at the routes of realization of factivity in situations that have been marked in the literature as instantiating non-explanatory understanding. The main quest concerns the differences between the issues raised by factivity in explanatory cases and in non-explanatory ones.

One focus will be on the way the historical arguments and the arguments from idealizations, raised in support of the non-factivity claim, get contextualized in the non-explanatory cases of understanding. As some of the non-explanatory means do not involve propositional content, the factivity issue has to be reassessed.


I will therefore reject the pure reductivist view that non-explanatory forms are just preliminary, incomplete forms of explanatory understanding. In the last part I will turn to a second point, by reference to the previous discussion. The effectiveness condition was advanced by de Regt as an alternative to the veridicality condition. I will support a mixed view which states the need to include reference to both conditions. The cases of non-explanatory understanding might better illuminate the way the two components are needed in combination.

Moreover, in some non-explanatory cases one of the above conditions might take precedence over the other, for example along the separation between the means with propositional content (possible explanations, thought experiments) and those of a non-propositional nature.

Factive scientific understanding is the thesis that scientific theories and models provide understanding insofar as they are based on facts. Because science heavily relies on various simplifications, it has been argued that the facticity condition is too strong and should be abandoned (Elgin 2017; Potochnik 2017). In this paper I present a general model of metabolic pathway regulation by feedback inhibition to argue that even highly simplified models that contain various distortions can provide factive understanding.

However, there are a number of issues that need to be addressed first. For instance, the core of the disagreement over the facticity condition for understanding revolves around the notion of idealization. Here, I show that the widely used distinction between idealizations and abstractions faces difficulties when applied to the model of metabolic pathway regulation. Some of the key assumptions involved in the model concern the type of inhibition and the role of concentrations.
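For concreteness, here is a minimal numerical sketch of end-product feedback inhibition in a three-step pathway (my illustration, not the paper's diagrammatic model; the rate constants and the simple inhibition term are assumed):

```python
# Hypothetical sketch of a pathway S -> I -> P where the product P
# inhibits the first enzyme. All constants are assumptions made for
# illustration, not values from the paper.

def simulate(steps=10_000, dt=0.01, k1=1.0, k2=0.5, k3=0.2, Ki=0.5):
    S, I, P = 10.0, 0.0, 0.0            # substrate, intermediate, product
    for _ in range(steps):
        v1 = k1 * S / (1.0 + P / Ki)    # first step, slowed as P accumulates
        v2 = k2 * I                     # conversion of intermediate to product
        v3 = k3 * P                     # consumption/degradation of product
        S += dt * (-v1)
        I += dt * (v1 - v2)
        P += dt * (v2 - v3)
    return S, I, P

print(simulate())  # concentrations (S, I, P) after 100 simulated time units
```

The inhibition factor 1/(1 + P/Ki) is precisely the sort of assumption about the type of inhibition and the role of concentrations that the paper examines.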

Usually, it is the idealizations that are considered problematic for the factivist position, because idealizations are thought to introduce distortions into the model, something abstractions do not do. However, I show that here abstractions distort key difference-makers. This seemingly further supports the non-factivist view, since if abstractions may involve distortions, then not only idealized models but abstract models as well cannot provide factive understanding.

I argue that this is not the case here. The diagrammatic model of metabolic pathway regulation does provide factive understanding insofar as it captures the causal organization of an actual pathway, notwithstanding the distortions. I further motivate my view by drawing an analogy with the way in which Bokulich presents an alternative view of the notions of how-possibly and how-actually models. The conclusion is that, at least in some instances, highly simplified models which contain key distortions can nevertheless provide factive understanding, provided we correctly specify the locus of truth.

Bokulich, A.
Love, A. In Dilworth (ed.).
Potochnik, A.

The view that scientific representations bear understanding insofar as they capture certain aspects of the objects being represented has recently been attacked by authors claiming that factivity (veridicality) is neither necessary nor sufficient for understanding. Instead of being true, partially true, or true enough, these authors say, the representations that provide understanding should be effective, i.e. such that they enable correct inferences about the matters of interest.

If we take this inferential aspect of understanding seriously, we should be ready to address the question of what makes the conclusions of the alleged inferences correct. It seems as if there is no alternative to the view that no kind of inference could reliably lead to correct, i.e. true or true enough, conclusions unless it proceeds from premises that are themselves at least true enough. Indeed, it can be shown that the examples which the critics of the factivity of understanding have chosen as demonstrations of non-factive understanding can be successfully analyzed in terms of true enough premises endorsing correct conclusions.

To sum up, the non-factivists have done a good job by stressing the inferential aspects of understanding. However, it should be recognized that there is no way to make reliably correct predictions and non-trivial inferences if the latter are not based on true, partially true, or true enough premises.

References
De Regt, H. How false theories can yield genuine understanding. In: Grimm, S. (ed.), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science. New York: Routledge, 50–.