
    Philosophy of the Information and Computing Sciences
    from 8 Feb 2010 through 12 Feb 2010

 

Abstracts
(Alphabetical by Speaker)

Mark Bishop (London)
Stochastic Diffusion Processes: self-organisation, search and intelligence

In 1962 Rosenblatt described connectionist approaches to Artificial Intelligence (AI) primarily in terms of pattern-matching tasks [1]. Fourteen years later Newell and Simon firmly defined symbolic AI in terms of symbol manipulation and search [2]. At the heart of both strands of AI lies a tacit acceptance of a fundamental computational metaphor for cognition. Since few explicitly argue for this computational metaphor, it seems that most AI workers implicitly assume that computations lie at the heart of cognition.

This talk will examine knowledge representation and processing issues in both of the above cognitive architectures and briefly review criticism of their underlying computationalism, before outlining a new connectionist metaphor grounded upon stochastic diffusion in place of computations [3]. As an example of this metaphor I will describe how a simple network of stochastically communicating agents can both dynamically represent high-arity knowledge and self-organise to perform globally optimal search.

I will conclude the presentation with a practical discussion illustrating how this methodology can be used to find the best Indian restaurant in town...
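To make the diffusion mechanism concrete, a minimal Python sketch of the restaurant game might look as follows. Each agent holds a hypothesis (a restaurant), partially tests it by evaluating one randomly chosen aspect per iteration and, if the test fails, either copies the hypothesis of a randomly polled active agent or re-initialises at random. The restaurant names, scoring values and parameters are illustrative assumptions, not taken from the talk or from reference [3].

```python
import random

# Hypothetical search space: each restaurant is scored on a few aspects
# (each value = probability that a randomly chosen diner is happy with it).
RESTAURANTS = {
    "Taj Mahal":    [0.9, 0.8, 0.7],   # food, service, price
    "Bombay Spice": [0.6, 0.5, 0.9],
    "Delhi Diner":  [0.3, 0.4, 0.2],
}

N_AGENTS, N_ITER = 100, 50

def partial_test(hypothesis):
    """Partial evaluation: test one randomly chosen aspect of the hypothesis."""
    return random.random() < random.choice(RESTAURANTS[hypothesis])

# Initialise every agent with a random hypothesis and an 'inactive' flag.
agents = [{"hyp": random.choice(list(RESTAURANTS)), "active": False}
          for _ in range(N_AGENTS)]

for _ in range(N_ITER):
    # Test phase: each agent partially evaluates its own hypothesis.
    for a in agents:
        a["active"] = partial_test(a["hyp"])
    # Diffusion phase: an inactive agent polls a random agent; if that agent
    # is active it copies its hypothesis, otherwise it re-initialises at random.
    for a in agents:
        if not a["active"]:
            other = random.choice(agents)
            a["hyp"] = (other["hyp"] if other["active"]
                        else random.choice(list(RESTAURANTS)))

# The largest cluster of agents indicates the globally best restaurant.
counts = {r: sum(a["hyp"] == r for a in agents) for r in RESTAURANTS}
print(max(counts, key=counts.get), counts)
```

The agent population typically converges on the restaurant with the highest overall scores, even though no individual agent ever evaluates a restaurant in full.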

REFERENCES:
[1] Rosenblatt, F., (1962), Strategic Approaches to the Study of Brain Models, in von Foerster, H., (ed), Principles of Self-Organization, Pergamon Press.
[2] Newell, A., & Simon, H.A., (1976), Computer Science as Empirical Inquiry: Symbols and Search, Communications of the ACM 19:3, 113-126.

[3] Nasuto, S.J., Bishop, J.M. & De Meyer, K., (2009), Communicating neurons: a connectionist spiking neuron implementation of stochastic diffusion search, Neurocomputing 72, pp. 704-712.
---------------------------------------------------------------------------------------------------------------------

 

Philip Brey (Enschede, member scientific committee)
Well-Being Values and Virtues in Information Technology

In this talk, I will argue that the design features of information technology products have a significant influence on both the well-being of users and the development of their virtues, by which I mean good qualities in persons that are reflected in their behavior or demeanor. Since information technology plays a pervasive role in modern life, it is important to understand this influence and to anticipate how it can be taken into account in design. The analysis I will present will be based on the disclosive computer ethics approach that I have previously developed, as well as on value-sensitive design approaches (Friedman, Nissenbaum and others). Using these approaches, I will show how various virtues and aspects of well-being are influenced by the design features of different IT products and services, and how these factors can be better accounted for in design. My focus will be on everyday uses of PCs, the Internet, (smart) mobile phones and PDAs.

This paper is part of New Media and the Quality of Life, a six-year Vici project funded by the Netherlands Organization for Scientific Research, which I lead. The project involves three postdocs and two PhD students and runs from 2006 to 2012.
---------------------------------------------------------------------------------------------------------------------

 

S. Barry Cooper (Leeds)
Causality in a Computability-Theoretic Context

We examine the computability-theoretic content of a range of real-world environments. The examples considered - including those  from physics, biology and the theory of mind - all entail long-standing problems in characterising causal relations. The  intended aim is to enhance our appreciation of the mathematics underlying causality in general, and to confirm and qualify basic intuitions about its computational infrastructure in particular cases.
---------------------------------------------------------------------------------------------------------------------

 

Gilles Dowek (Palaiseau)
From axioms to algorithms

 

Several properties of pure predicate logic, such as consistency, the subformula property, the disjunction and witness properties for constructive proofs, and the completeness of various proof-search methods, do not generalize to arbitrary theories, defined as sets of axioms. This may be a sign that this definition of the notion of theory is flawed, because it is too general.

We present, in this talk, an alternative definition of the notion of theory in which the axioms are replaced by an algorithm. This new answer to the question ``What is a theory?'' makes it possible to extend the properties above to many theories, including Arithmetic, Simple type theory and some versions of Set theory. It also leads to a new understanding of the articulation between computation and deduction in proofs.
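One standard way to picture this move (an illustrative gloss, not taken from the abstract itself) is the treatment of the axiom for zero in arithmetic: the axiom is replaced by a rewrite rule, so that an equation such as 2 + 0 = 2 is settled by computation rather than by an axiom-specific deductive step.

```latex
% Axiomatic presentation: the axiom for zero plus a deductive step (instantiation)
\[ \forall x\,(x + 0 = x) \;\;\vdash\;\; 2 + 0 = 2 \]

% ``Algorithmic'' presentation: the axiom is replaced by a rewrite rule
\[ x + 0 \;\longrightarrow\; x \]

% The goal now reduces by computation to a trivial identity, closed by
% reflexivity alone: the deductive step has become a computation step.
\[ 2 + 0 = 2 \;\;\longrightarrow\;\; 2 = 2 \]
```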

This view, in which computation and deduction are not opposed but articulated, yields a global view of the history of mathematics, in which the pre-axiomatic mathematics of early antiquity and the axiomatic mathematics beginning with the Greeks are likewise not opposed but articulated.
---------------------------------------------------------------------------------------------------------------------

 

Charles Ess (Aarhus)
Networked Communications and Social Media: New Identities, New  Societies, New Religions?

 

Drawing on contemporary communication theory and recent research from the domain of computer-mediated communication, I first show that the emergence of networked communication media (including the Internet and the World Wide Web), as well as more recent “Web 2.0” applications (social media, micro-blogging, and “prod/user”-created content distributed via sites such as YouTube, etc.), correlates with documented shifts in conceptions of privacy in both “Western” and “Eastern” cultures. These shifts converge towards a kind of “group privacy” that I suggest correlates with what may be called a relational, networked, or “smeared-out” self. In the Western world, however, this shift represents a striking transformation from the modern conception of the individual autonomous self - and its correlative requirement for individual privacy - that is foundational both to modern Protestantism and to liberal democracies.

A key question is whether this shift from the modern self, as affiliated with the communication modalities of print and literacy, towards an identity affiliated with what Walter Ong characterizes as “secondary orality” will issue in a new sort of identity that represents a hybrid of the individual and smeared-out selves (as Ong predicted) - or in a new identity that erases and replaces the modern individual, with obvious consequences for both religious and political institutions and organizations. Using both historical and more recent research (specifically, on uses of social media in contemporary religious institutions) and theory (including  Zsuzsanna Kondor’s notion of “secondary literacy” as an alternative to “secondary orality”), I will suggest that we’re seeing more movement in the direction of a hybrid identity rather than a new identity that will extinguish the modern one. In addition to the promise of this hybrid identity in terms of preserving at least some elements of the modern self and liberal state,  such a turn further bodes well for a pluralistic but global information ethics - i.e., one that conjoins shared norms and practices with diverse interpretations and applications that preserve the irreducible differences defining diverse cultures and traditions. More broadly, as this turn includes a (re)turn in the West to the body and embodiment as inextricable elements of human identity, I will try to show how this shift will likely bring in its train a focus on virtue ethics - an ethics that is especially suited, again, to the development of a pluralistic but global ethics more broadly.
---------------------------------------------------------------------------------------------------------------------

 

Public Lecture Luciano Floridi (Hertfordshire-Oxford)
Bodies of Information -  e-Health and its Philosophical implications
(see separate announcement)

---------------------------------------------------------------------------------------------------------------------

 

Luciano Floridi (Hertfordshire-Oxford, member scientific committee)
The constructionist philosophy of computer science

 

In this talk, I shall begin by giving a broad account of what I mean by constructionism, with some references to the philosophical tradition that has inspired it, the so-called “maker’s knowledge” tradition. I shall then discuss the approach in some detail, sketching how it works in computer science. The main thesis will be that constructionism is a form of knowledge engineering. I shall then argue in favour of a constructionist approach to knowledge. To put it simply, an epistemic agent knows something only if that agent is able to build (reproduce, simulate, construct etc.) that something and embed it in the correct network of relations that account for it. Alternatively, an agent qualifies as an epistemic agent not when she is a passive user but when she is a critical producer of knowledge. The maker’s knowledge is knowledge of the ontology of the conceptual artefact, and this is a fundamental epistemological lesson we can learn from computer science. In the conclusion, I shall illustrate the general thesis of the talk by showing how the Turing Test can be correctly read as an application of the constructionist method.
---------------------------------------------------------------------------------------------------------------------

 

Frances Grodzinsky (Sacred Heart University)
Developing Artificial Agents Worthy of Trust: Using an object-oriented model

 

There is a growing literature on the concept of e-trust, and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments.  We review important recent contributions to this literature regarding e-trust in conjunction with our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.

 

This presentation will reflect work done by Frances Grodzinsky, Keith Miller and Marty Wolf in the area of e-trust and artificial agents.
---------------------------------------------------------------------------------------------------------------------

 

Klaus Mainzer (Muenchen)
Information  and Computation of Complex Dynamical Systems - Epistemic and System-Theoretical Perspectives of Information and Computing Sciences

 

(1) What are complex dynamical systems? (Basic concepts, e.g., nonlinearity, attractors, order parameters.)
(2) Complex dynamical systems in nature (e.g., physics, chemistry, biology, brain research).
(3) Complex computational models of nature (e.g., cellular automata, neural networks, cognitive robots).
(4) Complex information and communication networks (e.g., WWW, scale-free networks, power laws).
(5) Limits of information and computational complexity (e.g., degrees of dynamical, informational, and computational complexity).

REFERENCES:
[1] K. Mainzer, Thinking in Complexity. The Computational Dynamics of Matter, Mind, and Mankind, 5th revised and enlarged edition, Springer, Berlin, 2007.

[2] K. Mainzer, Symmetry and Complexity. The Spirit and Beauty of Nonlinear Science, World Scientific, Singapore, 2005.
[3] K. Mainzer, Komplexität, UTB-Profile, Stuttgart, 2008.
---------------------------------------------------------------------------------------------------------------------

 

John-Jules Meyer (Utrecht, member scientific committee)

Philosophy of agent technology


Agent technology is a paradigm within the area of artificial intelligence that has emerged over the last two decades. Central to the paradigm is the notion of an intelligent agent, a hardware or software entity that displays a certain degree of autonomy in deciding what to do on the basis of its mental state, typically expressed in terms of beliefs, desires/goals and intentions/plans. At the moment agent research mainly concentrates on the realisation and use of multi-agent systems, in which various agents coordinate and cooperate to perform a certain task.

Interestingly, the agent paradigm has deep philosophical roots, going back to Aristotle's theory of the practical syllogism. More modern philosophical ideas have also contributed to shaping the notion of an intelligent agent, such as Dennett's intentional stance, Bratman's theory of intention, and ideas from ethics and the philosophy of law, such as deontic logic and normative reasoning. I will show that these ingredients are still recognizable in current implementations of agents, in particular in agent-oriented programming languages and so-called normative multi-agent systems.
---------------------------------------------------------------------------------------------------------------------

 

James Moor (Dartmouth)
Ethical Issues in AI

AI raises a number of ethical issues that go well beyond those of ordinary computer science. What, if any, is the difference between AI ethics and computer ethics? This presentation will attempt to circumscribe the area of AI ethics and examine what is similar and what is different between these fields. Special attention will be given to the issue of value selection concerning cyborgs and robots, and to showing how this has important consequences for our understanding of ethics. This will take us to the heart of ethical theory and the issue of whether human values should automatically trump all others.
---------------------------------------------------------------------------------------------------------------------

 

Branislav Rovan (Bratislava, member scientific committee)
Rethinking information - Small steps towards a big agenda
The formalisation of the notion of information that we use today was introduced over half a century ago by Shannon in order to study problems in transmitting information. A more adequate formalisation is needed to capture and study new ways of using and processing information. We shall mention some attempts in this direction that are beginning to emerge. We shall briefly discuss several attributes of information and concentrate on a particular formalisation for measuring the usefulness of information.

---------------------------------------------------------------------------------------------------------------------

 

Steve Russ (Warwick)
Human computing
In their book `Understanding Computers and Cognition' (1986), Winograd and Flores suggested that the traditional ‘tool use’ of computers, with its emphasis on logic and language, had been unduly influenced by the ‘rationalistic’, or analytic, traditions in philosophy. They proposed an approach which gave more emphasis to the role of human interpretation. Reflection on the practice of computing in recent decades, and the technical possibilities for visual modelling with immediate feedback, have motivated our approach in Empirical Modelling at Warwick to take the boundary even further back, to include the personal and provisional, and to regard computer modelling as a supplementary source of direct experience which is ‘of-a-piece’ with other sources of experience.

 

Such a broadening of the boundary of computing has a reciprocal effect on theory and philosophy. To give an account of personal computing in a more profound sense than hitherto we have developed a re-conceptualisation of computing in terms of observation and exploratory experiment, dependency and agency. The principles and tools associated with such new theory also offer – to some extent – an integration of computer processes with human processes. It is this integration to which we refer by ‘human computing’. Extending our sources of experience in this way leads to a kind of computing that is constructivist in spirit and has an impact (as yet largely unexplored) on our understanding of knowledge.

 

For a substantial body of publications, teaching materials and practical tools and models see: http://www.dcs.warwick.ac.uk/modelling/

The emphasis on experience and its relationship to knowledge, as well as insights and inspiration from William James, are elaborated in the paper `Radical Empiricism, Empirical Modelling and the Nature of Knowing'. The idea of ‘human computing’ is developed with several examples in the paper `Human Computing: Modelling with Meaning'. The idea of two phases of development, from the exploratory stage of the initial analysis of a domain through to the ‘post-theory’ implementation of a provisional model, is illustrated and expanded upon, with some philosophical commentary, in the paper `Experimenting with Computing'.

REFERENCES:
[1] www2.warwick.ac.uk/fac/sci/dcs/research/em/publications/papers/078/
[2] www2.warwick.ac.uk/fac/sci/dcs/research/em/publications/papers/082/
[3] www2.warwick.ac.uk/fac/sci/dcs/research/em/publications/papers/098/
---------------------------------------------------------------------------------------------------------------------

 

Viola Schiaffonati (Milano)

Computer science and experiments: a methodological and epistemological  perspective for an ontological problem
The concept of experiment lies at the core of modern science, and good experimental methodologies constitute the basis of current empirical disciplines. Within the debate about the nature of computer science as a science, the analysis of the role played by experiments is essential, even though it has not yet been systematically addressed.

The aim of my talk is to analyze the role of experiments in computer science, by presenting in particular two perspectives under which they can be understood. The first is a methodological perspective regarding the way in which experiments are used in some areas of computer science, and I will discuss some examples taken from the field of mobile robotics. The second is an epistemological perspective regarding the way in which computer science tools can be legitimately employed to make experiments in other scientific disciplines (e.g., physics, biology, chemistry), and I will discuss some examples of computer simulations used as experiments.

I believe this offers an interesting perspective from which to ask whether computer science is a natural science, an engineering science or a science of the artificial, and at the same time a way of assessing the working methodologies of the philosophy of computer science and its relationship with the philosophy of science.
---------------------------------------------------------------------------------------------------------------------

 

Jeremy Seligman (Auckland)
Information - Do we know what we are talking about?
Is talk of information content, information processing and information flow merely suggestive rhetoric, loose talk to make our ideas about meaning and knowledge sound contemporary? Or is there an underlying conceptual framework, grounded in a century of theoretical and practical advances in logic and computer science? And if so, what are the prospects of articulating such a framework in a mathematically precise way? Of the many uses of the term `information' to describe aspects of mathematical models in logic and computer science, which, if any, covers the range of uses to which this term is put in technical and non-technical contexts? The relationship between classical and algorithmic information theory is relatively well understood, but neither pretends to have much to say about the concept of information content as used by logicians and philosophers. In recent years, ideas from game theory, modal logic and proof theory have been shown to be closely aligned, further suggesting the possibility of a general theory of information that does justice to the use of information-talk in these fields. In this presentation, I will survey some ideas about information that strike me as significant and assess the possibility of modelling them in a common framework.
---------------------------------------------------------------------------------------------------------------------

 

Miguel Sicart (ITU Copenhagen)

Bots with Virtues: Digital Games, Artificial Agency, and Ethics

One of the most interesting topics in Information Ethics concerns artificial agency and the moral status of artificial agents. Information Ethics theorizes that artificial agents are also worthy of a moral status that makes them both ethically accountable and subject to ethical harm. However, the work on ethical agency has, in my opinion, limited itself to purely theoretical research, not necessarily proposing how morally-aware artificial agents can be designed and implemented.

In this presentation I will argue that digital games are a perfect environment for developing and testing ethically-aware information agents. I will argue for an ontology of digital games that opens up the possibility of designing agents that can arguably be ethically aware within the constrained domain of the game situation. I claim that artificial agency is possible within the boundaries of digital games, and hence that ethically-aware artificial agency can, in principle, be designed.
---------------------------------------------------------------------------------------------------------------------

 

Mariarosaria Taddeo (Oxford)
An information-based solution for the puzzle of testimony and trust

The paper analyses the relation between testimony and trust, endorsing an informational perspective and a definition of trust as a second-order property of first-order relations (which I have provided elsewhere). The paper develops its analysis using a model of an artificial distributed system composed of purely rational artificial agents.

 The informational perspective allows one to consider the whole spectrum of occurrences of testimony, and hence to highlight the importance of this phenomenon in the dynamics of artificial distributed systems. The analysis of trust as a second-order property makes it possible to explain how testimony affects the interactions of the artificial agents of a system and the transmission of knowledge among them.

The main thesis defended in the paper is that testimony transmits information, and that testimony is an occurrence of a first-order relation of communication affected by the second-order property of trust. On the basis of this analysis, it is shown that the acquisition of knowledge through testimony occurs if and only if trust in the sender of the message is grounded on the objectively assessed reliability of the sender. ‘Objectively’ is emphasised because such reliability is in turn analysed pragmatically, in terms of the success of the sender’s past informational acts. This success depends on the truthfulness of the information transmitted.
---------------------------------------------------------------------------------------------------------------------

 

Matteo Turilli (Oxford)
Design as a modality of knowledge acquisition
The paper focuses on the concept of design in computer science and contributes to the field of the Philosophy of Information. Specifically, the relationship between software design and the philosophy of information is investigated in the context of an epistemology of information. It is argued that the cognate concepts of design, abstraction and refinement can be used to explain a specific modality of knowledge acquisition. The paper is divided into four parts. In the first part, different definitions of design are critically assessed by focusing on the relationship between design, abstraction and refinement in computer science. In the second part, the problem of the nature of the objects of design is discussed. It is argued that design is a goal-oriented effort that structures data into coherent abstractions. In the third part, the results previously obtained are used to underline the normative nature of an act of design and to illustrate its relationship with the iterative process underlying the acquisition of knowledge. The paper closes with an analysis of how this approach relates to the network theory of account and the correctness theory of truth.
---------------------------------------------------------------------------------------------------------------------

 

Raymond Turner (Essex, member scientific committee)
Computation with deduction

That computing science is somehow a mathematical activity was a view held by many of the pioneers of the subject, especially those who were concerned with its foundations. There are two well-known interpretations of this claim.

 

The first is due to Tony Hoare and insists that programming is a deductive activity in which programs are formal objects subject to mathematical analysis.

'Computer programming is an exact science in that all the properties of a program and all the consequences of executing it in any given environment can, in principle, be found out from the text of the program itself by means of purely deductive reasoning.' (Hoare, 1969)

The second is due to Christopher Strachey and concerns the mathematical status of actual programming languages.


`I am not only temperamentally a Platonist and prone to talking about abstracts if I think they throw light on a discussion, but I also regard syntactical problems as essentially irrelevant to programming languages at their present state of development. In a rough and ready sort of way, it seems to be fair to think of the semantics as being what we want to say and the syntax as how to say it. In these terms the urgent task in programming languages is to explore the field of semantic possibilities. ... When we have discovered the main outlines and the principal peaks we can go about describing a suitable neat and satisfactory notation for them. But first we must try to get a better understanding of the processes of computing and their description in programming languages. In computing we have what I believe to be a new field of mathematics which is at least as important as that opened up by the discovery (or should it be invention) of the calculus.' (Strachey, 2000)

While the first claims that programming is a purely deductive activity, the second insists that programming languages are (semantically) mathematical notions. In this talk we shall explore and provide a critical assessment of these claims and their relationship. This will force us to look more carefully at the general nature and form of mathematical theories. It will also force us to examine the alternative perspectives in which programming and language design are taken to be engineering and/or empirical activities.

REFERENCES:
[1] Hoare, C.A.R. (1969). An Axiomatic Basis for Computer Programming. Communications of the ACM, 12(10), 576-583.
[2] Strachey, C. (2000). Fundamental Concepts in Programming Languages. Higher-Order and Symbolic Computation, 13(1-2), 11-49.
---------------------------------------------------------------------------------------------------------------------


Johan van Benthem (Amsterdam, member scientific committee)
Logic as a theory of information flow

Logic can be seen as a theory of meaningful information and its transformations under actions of deduction, as well as observation and communication. We will first discuss which notions of information are involved here, and identify three different basic stances that reflect the diversity of 'information' in the sciences in general. Next, since key information-producing actions are crucially performed by agents, we will discuss the shape of a logical theory of information that studies three items in tandem: representations of information, the events or processes that change these, and the agents involved in such events for whom the information is meaningful. We end with links between this view and philosophical epistemology, as well as the foundations of computation.
---------------------------------------------------------------------------------------------------------------------

 

Jeroen van den Hoven (Delft)
Ethics and Information Technology in the 21st Century: the promise of Value Sensitive Design

 

Without Information and Communication Technologies many of the activities that we undertake in the 21st century in the world of trade, finance, transport, healthcare, science, education, administration, management, communication, energy supply, industrial production, defence, engineering and technology would be impossible. Computers have become a necessary condition for all of our large scale projects and complex endeavours.


I think that one of the central moral questions of the 21st century is not so much ‘What ought one to do?’ but ‘What ought to be designed?’ What people can do and be often depends on the way their physical and epistemic environment was designed. More specifically, information systems, computers and networked environments provide affordances and constraints to users: they enable and disable. An important object of ethical analysis and moral attention should therefore be ‘designs’ and the way they affect someone’s agency and information position. How can we design the systems, institutions, infrastructures and ICT applications in the context of which users will be able to do what they ought to do, and which will enable them to prevent what ought to be prevented? I argue that the method of ‘value sensitive design’ represents a mode of practical ethics of IT which allows one to design systems that are expressive of our fundamental values and value considerations.

---------------------------------------------------------------------------------------------------------------------

 

Jan van Leeuwen (Utrecht, scientific organizer)
Computation as unbounded process
Computation is arguably the most basic form of information processing. In the philosophy of computation, it is often understood as transforming information by some repeated systematic process, without constraining this much further a priori. For example, Goldreich describes computation as `a process that modifies an environment by the repeated application of a predetermined simple rule', and suggests that it can apply equally to a wide variety of processes in `natural reality' and to the human-defined or -created processes in `artificial reality' which he associates with the world of computers and automation. It is thus conceivable that there are many more starting points for modelling computation than the time-tested approaches initiated more than seventy years ago by Turing and his contemporaries, especially in view of, e.g., the unbounded operation, interactivity, and non-uniformity over time that characterise many computational mechanisms today.

In this lecture we will review some of the attempts to capture these modern phenomena in computation, only to converge rather quickly on a natural model of computation which subsumes several earlier theories and in which only the number of process switches during a computation counts as a measure of the computed effects. The approach (in progress) is inspired by agent-oriented computation, and not only simplifies several earlier theories but also allows the formulation at this level of another time-tested concept, namely non-determinism. It leads to a `determinism versus nondeterminism' question remotely akin to the P-versus-NP question in classical algorithmic computation. While the classical question is (and remains) open, it appears that in the new model the corresponding classes are not equal. We discuss the possible implications for the philosophy of computation and what road ahead there may be for further studies of computation as unbounded process.

[1] O. Goldreich, Computational complexity - A conceptual perspective, Cambridge University Press, Cambridge, 2008.
---------------------------------------------------------------------------------------------------------------------

 

Jiri Wiedermann (Prague)
Amorphous computing systems
The idea of amorphous computing systems first emerged, under various disguises, in science-fiction literature: cf. the 1957 novel “The Black Cloud” by the astrophysicist Sir Fred Hoyle, or the 1999 Hugo Award novel “A Deepness in the Sky” by the mathematician and computer scientist Vernor Vinge, in which advanced amorphous computing systems appear in the form of “localizers”. Contemporary engineering efforts to construct such systems are represented, e.g., by the 2001 “smart dust” project of K.S.J. Pister (University of California). Real bacteria are an example of such systems in nature.

From a computational viewpoint, amorphous computing systems differ from classical ones in almost every aspect: they consist of a set of simple processors or robots that can communicate wirelessly over a limited distance. The processors are randomly placed in a closed area or volume; in some applications they can move, either actively or passively (e.g., in a bloodstream). All processors are identical; they do not share a global clock and do not have unique identifiers. How can such systems compute? Do such systems possess universal computing power? Do finite automata suffice for such a task?

We present a generic model of such systems and its variants. The processors are modeled by timed probabilistic finite-state automata. In a “macro-sized” model the automata communicate via a single-channel radio; in a “nano-sized” model, via molecular communication. We sketch the main ideas leading to the design of probabilistic communication protocols and to the emergence of communication networks, and indicate some open problems. We show that all the families of systems under consideration possess universal computing power.
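As a rough illustration of the “macro-sized” setting, the following Python sketch simulates anonymous nodes scattered uniformly at random in the unit square, broadcasting on a single shared channel: a node receives a message only if exactly one of its in-range neighbours transmits in that round, so flooding a single seed message succeeds only probabilistically. All parameters and the collision rule are assumptions made for the example, not taken from the model or the papers below.

```python
import random, math

N, RADIUS, P_SEND, STEPS = 200, 0.12, 0.05, 400   # assumed parameters

# Anonymous nodes dropped uniformly at random; no identifiers, and no global
# clock beyond the lock-step rounds used by this toy simulation.
nodes = [{"pos": (random.random(), random.random()), "informed": False}
         for _ in range(N)]
nodes[0]["informed"] = True                        # a single seed message

def in_range(a, b):
    return math.dist(a["pos"], b["pos"]) <= RADIUS

for _ in range(STEPS):
    # Informed nodes transmit with small probability (random back-off).
    senders = [n for n in nodes if n["informed"] and random.random() < P_SEND]
    for n in nodes:
        # Single-channel radio: reception succeeds only if exactly one
        # in-range neighbour transmitted this round (otherwise collision
        # or silence).
        audible = [s for s in senders if s is not n and in_range(n, s)]
        if len(audible) == 1:
            n["informed"] = True

print(sum(n["informed"] for n in nodes), "of", N, "nodes informed")
```

Even this crude simulation shows the characteristic behaviour: despite anonymity, collisions and purely local communication, information reliably spreads through the random network when the density and transmission probability are suitably chosen.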

REFERENCES:
[1] L. Petru: Universality in Amorphous Computing. PhD Thesis (submitted), Faculty of Mathematics and Physics, Charles University, Prague, 2009.
[2] J. Wiedermann, L. Petru: Computability in Amorphous Structures. In: Proc. CiE 2007, Computation and Logic in the Real World. LNCS Vol. 4497, Springer, pp. 781-790,  2007.
[3] J. Wiedermann, L. Petru. Communicating Mobile Nano-Machines and Their Computational Power. In: Third International ICST Conference, NanoNet 2008, Boston, MA, USA, September 14-16, 2008, Revised Selected Papers, LNICST Vol. 3, Part 2, Springer, pp. 123-130, 2009.
[4] J. Wiedermann, L. Petru: On the Universal Computing Power of Amorphous Computing Systems. Theory of Computing Systems 46:4 (2009), 995-1010, www.springerlink.com/content/k2x6266k78274m05/fulltext.pdf


