28 March 2025

🏷️Knowledge Representation: On Causal Maps (Quotes)

"[…] learning consists not in stimulus-response connections but in the building up in the nervous system of sets which function like cognitive maps […] such cognitive maps may be usefully characterized as varying from a narrow strip variety to a broader comprehensive variety." (Edward C Tolman, "Cognitive maps in rats and men", 1948)

"A person is changed by the contingencies of reinforcement under which he behaves; he does not store the contingencies. In particular, he does not store copies of the stimuli which have played a part in the contingencies. There are no 'iconic representations' in his mind; there are no 'data structures stored in his memory'; he has no 'cognitive map' of the world in which he has lived. He has simply been changed in such a way that stimuli now control particular kinds of perceptual behavior." (Burrhus F Skinner, "About behaviorism", 1974)

"A cognitive map is a specific way of representing a person's assertions about some limited domain, such as a policy problem. It is designed to capture the structure of the person's causal assertions and to generate the consequences that follow from this structure. […] a person might use his cognitive map to derive explanations of the past, make predictions for the future, and choose policies in the present." (Robert M Axelrod, "Structure of Decision: The cognitive maps of political elites", 1976)

"The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions." (Robert Axelrod, "The Cognitive Mapping Approach to Decision Making" [in "Structure of Decision: The Cognitive Maps of Political Elites"], 1976)

"The cognitive map is not a picture or image which 'looks like' what it represents; rather, it is an information structure from which map-like images can be reconstructed and from which behaviour dependent upon place information can be generated." (John O'Keefe & Lynn Nadel, "The Hippocampus as a Cognitive Map", 1978)

"A fuzzy cognitive map or FCM draws a causal picture. It ties facts and things and processes to values and policies and objectives. And it lets you predict how complex events interact and play out. [...] Neural nets give a shortcut to tuning an FCM. The trick is to let the fuzzy causal edges change as if they were synapses in a neural net. They cannot change with the same math laws because FCM edges stand for causal effect not signal flow. We bombard the FCM nodes with real data. The data state which nodes are on or off and to which degree at each moment in time. Then the edges grow among the nodes."  (Bart Kosko, "Fuzzy Thinking: The new science of fuzzy logic", 1993)
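Kosko's description of an FCM - concept nodes joined by signed causal edges, iterated until the activations settle - can be sketched in a few lines. Everything below (the node names, the weights, the logistic squashing function) is an illustrative assumption, not Kosko's own example:

```python
from math import exp

def squash(x):
    """Logistic squashing keeps each activation in (0, 1)."""
    return 1.0 / (1.0 + exp(-x))

def step(state, edges):
    """One synchronous update: each node squashes the sum of its weighted inputs."""
    new_state = {}
    for node in state:
        total = sum(state[src] * w for (src, dst), w in edges.items() if dst == node)
        new_state[node] = squash(total)
    return new_state

# Hypothetical three-node map: investment promotes growth, growth
# increases pollution, and pollution suppresses investment (a negative
# feedback loop, so the map settles rather than oscillating).
edges = {
    ("investment", "growth"): 0.8,
    ("growth", "pollution"): 0.6,
    ("pollution", "investment"): -0.7,
}
state = {"investment": 0.9, "growth": 0.1, "pollution": 0.1}
for _ in range(30):
    state = step(state, edges)
```

Tuning an FCM from data, as Kosko describes, would amount to adjusting the entries of `edges` from observed node states rather than fixing them by hand.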

"Under the label 'cognitive maps', mental models have been conceived of as the mental representation of spatial aspects of the environment. A mental model, in this sense, comprises the topology of an area, including relevant districts, landmarks, and paths." (Gert Rickheit & Lorenz Sichelschmidt, "Mental Models: Some Answers, Some Questions, Some Suggestions", 1999)

"Bounded rationality simultaneously constrains the complexity of our cognitive maps and our ability to use them to anticipate the system dynamics. Mental models in which the world is seen as a sequence of events and in which feedback, nonlinearity, time delays, and multiple consequences are lacking lead to poor performance when these elements of dynamic complexity are present. Dysfunction in complex systems can arise from the misperception of the feedback structure of the environment. But rich mental models that capture these sources of complexity cannot be used reliably to understand the dynamics. Dysfunction in complex systems can arise from faulty mental simulation-the misperception of feedback dynamics. These two different bounds on rationality must both be overcome for effective learning to occur. Perfect mental models without a simulation capability yield little insight; a calculus for reliable inferences about dynamics yields systematically erroneous results when applied to simplistic models." (John D Sterman, "Business Dynamics: Systems thinking and modeling for a complex world", 2000)

"Even if our cognitive maps of causal structure were perfect, learning, especially double-loop learning, would still be difficult. To use a mental model to design a new strategy or organization we must make inferences about the consequences of decision rules that have never been tried and for which we have no data. To do so requires intuitive solution of high-order nonlinear differential equations, a task far exceeding human cognitive capabilities in all but the simplest systems."  (John D Sterman, "Business Dynamics: Systems thinking and modeling for a complex world", 2000)

"The robustness of the misperceptions of feedback and the poor performance they cause are due to two basic and related deficiencies in our mental model. First, our cognitive maps of the causal structure of systems are vastly simplified compared to the complexity of the systems themselves. Second, we are unable to infer correctly the dynamics of all but the simplest causal maps. Both are direct consequences of bounded rationality, that is, the many limitations of attention, memory, recall, information processing capability, and time that constrain human decision making." (John D Sterman, "Business Dynamics: Systems thinking and modeling for a complex world", 2000)

"A causal map is an abstract representation of the causal relationships among kinds of objects and events in the world. Such relationships are not, for the most part, directly observable, but they can often be accurately inferred from observations. This includes both observations of patterns of contingency and correlation among events as well as observations of the effects of experimental interventions. We can think of everyday theories and theory-formation processes as cognitive systems that allow us to recover an accurate causal map of the world." (Alison Gopnik & Clark Glymour, "Causal maps and Bayes nets: a cognitive and computational account of theory-formation" [in "The cognitive basis of science"], 2002)

"Causal mapping is a technique that is used to elicit and represent domain knowledge of experts in the form of a graphical network called a causal map. A causal map (also called an influence diagram or a cause map) is a directed graph in which causal concepts (or nodes) represent the important variables that make up a domain. Causal connections are the directed arrows that connect these concepts to represent causal relationships between the variables." (S Nadkarni, "Aggregated causal maps: An approach to elicit and aggregate the knowledge of multiple experts", 2003)
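A causal map in this sense is simply a directed graph with signed arrows, and the sign of an indirect effect is the product of the signs of the links along the path. A minimal sketch, with concept names invented purely for illustration:

```python
# A causal map as a directed graph: each node maps to a list of
# (successor, sign) pairs, where sign is +1 (promotes) or -1 (inhibits).
edges = {
    "interest rates": [("investment", -1)],
    "investment":     [("employment", +1)],
    "employment":     [("consumer spending", +1)],
}

def path_effects(graph, start):
    """Every (node, sign) reachable from start; sign of a path is the
    product of the signs of its links."""
    effects = []
    stack = [(start, +1)]
    while stack:
        node, sign = stack.pop()
        for succ, link_sign in graph.get(node, []):
            effects.append((succ, sign * link_sign))
            stack.append((succ, sign * link_sign))
    return effects

# path_effects(edges, "interest rates") includes ("consumer spending", -1):
# higher rates indirectly depress spending through two positive links
# downstream of one negative link.
```

This is the mechanical core of "generating the consequences" from a map: tracing arrows and multiplying signs.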

"Eliciting and mapping the participant's mental models, while necessary, is far from sufficient [...] the result of the elicitation and mapping process is never more than a set of causal attributions, initial hypotheses about the structure of a system, which must then be tested. Simulation is the only practical way to test these models. The complexity of the cognitive maps produced in an elicitation workshop vastly exceeds our capacity to understand their implications. Qualitative maps are simply too ambiguous and too difficult to simulate mentally to provide much useful information on the adequacy of the model structure or guidance about the future development of the system or the effects of policies." (John D Sterman, "Learning in and about complex systems", Systems Thinking Vol. 3, 2003)

"When an individual uses causal mapping to help clarify their own thinking, we call this technique cognitive mapping, because it is related to personal thinking or cognition. When a group maps their own ideas, we call it oval mapping, because we often use oval-shaped cards to record individuals’ ideas so that they can be arranged into a group’s map. Cognitive maps and oval maps can be used to create a strategic plan, because the maps include goals, strategies and actions, just like strategic plans." (John M Bryson et al, "Visible Thinking: Unlocking Causal Mapping For Practical Business Results", 2004)

24 March 2025

🏷️Knowledge Representation: On Mind Maps (Quotes)

"A mind map harnesses the full range of cortical skills - word, image, number, logic, rhythm, color, and spatial awareness - in a single, uniquely powerful technique. In doing so, it gives you the freedom to roam the infinite expanse of your brain." (Tony Buzan & Barry Buzan, "The Mind Map Book: How to Use Radiant Thinking to Maximize Your Brain's Untapped Potential", 1996)

"Delay time, the time between causes and their impacts, can highly influence systems. Yet the concept of delayed effect is often missed in our impatient society, and when it is recognized, it’s almost always underestimated. Such oversight and devaluation can lead to poor decision making as well as poor problem solving, for decisions often have consequences that don’t show up until years later. Fortunately, mind mapping, fishbone diagrams, and creativity/brainstorming tools can be quite useful here." (Stephen G Haines, "The Manager's Pocket Guide to Strategic and Business Planning", 1998)

"An effective mind map is one that works for you and therefore it is your tailoring and your emphasis, images, colours, codes and style that will determine its effectiveness. Try to develop the habit of taking down all your notes in mind map format. If you are required to give presentations, do this from a mind map. When you are at meetings, take down the minutes in mind map layout and just notice the difference in your ability to retain exactly what happened at that meeting and compare it with your usual logical/analytical method of recording minutes." (Peter F Haddon, "Mastering Personal and Interpersonal Skills", 1999)

"Mind mapping is a technique whereby information is summarised in a form of pictorial representation which depends very much on the creativity of the individual involved. The idea is that when information is pictured in colourful word associations backed up by sketches or even stick drawings of the key words, it is far more easily remembered, much like when looking at a photograph you can recall in detail the happenings that led up to and followed the incident." (Peter F Haddon, "Mastering Personal and Interpersonal Skills", 1999)

"Knowledge maps are node-link representations in which ideas are located in nodes and connected to other related ideas through a series of labeled links. They differ from other similar representations such as mind maps, concept maps, and graphic organizers in the deliberate use of a common set of labeled links that connect ideas. Some links are domain specific (e.g., function is very useful for some topic domains...) whereas other links (e.g., part) are more broadly used. Links have arrowheads to indicate the direction of the relationship between ideas." (Angela M O’Donnell et al, "Knowledge Maps as Scaffolds for Cognitive Processing", Educational Psychology Review Vol. 14 (1), 2002) 
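The defining feature named above - a common set of labeled, directed links connecting ideas - maps naturally onto a simple data structure. The nodes and link labels below are hypothetical examples:

```python
# A knowledge map as labeled, directed links: (source, label, target).
# "part" is a broadly used link; "function" is more domain specific.
links = [
    ("bicycle", "part", "wheel"),
    ("wheel", "part", "spoke"),
    ("bicycle", "function", "transport"),
]

def linked(links, node, label):
    """All targets reached from node via links carrying the given label."""
    return [dst for src, lab, dst in links if src == node and lab == label]
```

The arrowheads in a drawn knowledge map correspond to the source-to-target direction of each triple.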

"Mind Mapping uses the full range of the brain's abilities, placing an image in the center of the page in order to facilitate memorization and the creative generation of ideas, and subsequently branches out in associative networks that mirror externally the brain's internal structures. By using this approach, the preparation of speeches can be reduced in time from days to minutes; problems can be solved both more comprehensively and more rapidly; memory can be improved from absent to perfect; and creative thinkers can generate a limitless number of ideas rather than a truncated list." (Marshall Goldsmith et al, "The Many Facets of Leadership", 2002)

"[A mind map is a] visual note-taking process that includes key words and pictures illustrating the relationships among concepts." (Ruth Colvin Clark & Chopeta Lyons, "Graphics for Learning: Proven guidelines for planning, designing, and evaluating visuals in training materials" 2nd ed., 2011)

"Data visualizations can also play a critical role when it is time to disseminate and communicate evaluation findings. Data visualization engages and supports program stakeholders by increasing their capacity to understand data and participate in the evaluation process. Collaboratively developed mind maps, logic models, and graphic illustrations can facilitate understanding of the findings and their implications by depicting a program’s most important activities, outcomes, and ultimate goal in a concise and clear manner. Well-designed interactive visualizations for reporting and community engagement help stakeholders answer questions of import within context and place engaged stakeholders in the driver’s seat in terms of defining variables and interpreting results." (Tarek Azzam et al, "Data Visualization and Evaluation" [in "Data visualization, part 1: New Directions for Evaluation" No. 139], 2013)

"Paradoxically one of the greatest advantages of mind maps is that they are seldom needed again. The very act of constructing a map is itself so effective in fixing ideas in memory that very often a whole map can be recalled without going back to it at all. A mind map is so strongly visual and uses so many of the natural functions of memory that frequently it can be simply read off in the mind's eye." (Peter Russell, "The Brain Book: Know Your Own Mind and How to Use it", 2013)

"With the adoption of a more schematic and abstract construct, deprived of realistic arboreal features, a tree diagram could sometimes be rotated along its axis and depicted horizontally, with its ranks arranged most frequently from left to right. Horizontal trees probably emerged as an alternative to vertical trees to address spatial constraints and layout requirements, but they also provide unique advantages. The nesting arrangement of horizontal trees resembles the grammatical construct of a sentence, echoing a natural reading pattern that anyone can relate to. This alternative scheme was often deployed on facing pages of a manuscript, with the root of the tree at the very center, creating a type of mirroring effect that is still found in many digital and interactive executions. Horizontal trees have proved highly efficient for archetypal models such as classification trees, flow charts, mind maps, dendrograms, and, notably, in the display of files on several software applications and operating systems." (Manuel Lima, "The Book of Trees: Visualizing Branches of Knowledge", 2014)

"Essentially, a mind map is a type of node-link diagram in which the nodes represent concepts and the links represent relationships between concepts. The central idea to be explored is placed in the middle of the page and it is expanded out from there. Usually mind maps are drawn as tree structures with no cross links between branches, but this can be restrictive." (Colin Ware, "Information Visualization: Perception for Design" 4th Ed., 2021)

"The educational use of mind maps and concept maps would seem to fit well with constructivist theory. To construct such a map, students must actively draw out links between various concepts as they understand them. The problem is that the cognitive engagement tends to be somewhat superficial for mind maps, since it does not require that students think deeply about the nature of the links." (Colin Ware, "Information Visualization: Perception for Design" 4th Ed., 2021)

"Idea mapping offers the power to represent qualitative data, describe relationships, and enable one to see the 'big picture'. Further, mapping allows us to represent data in a way that facilitates the conceptualizing of its meaning. It provides a 'map', which makes it possible to observe macrophenomena, discover trends, and generate creative options. Idea mapping makes it possible to represent multiple dimensions of a situation without losing sight of any of its parts; it is an efficient way to manage an overwhelming amount of qualitative information. Finally, it offers a way to present information to clients in a graphic form that is both easy to understand and data rich. Often, an entire strategic plan can be represented in one map." (Terry Moore)

30 January 2025

❄️Systems Thinking: On Sustainability (Quotes)

"Nature, displayed in its full extent, presents us with an immense tableau, in which all the order of beings are each represented by a chain which sustains a continuous series of objects, so close and so similar that their difference would be difficult to define. This chain is not a simple thread which is only extended in length, it is a large web or rather a network, which, from interval to interval, casts branches to the side in order to unite with the networks of another order." (Comte Georges-Louis Leclerc de Buffon, "Les Oiseaux Qui Ne Peuvent Voler", Histoire Naturelle des Oiseaux Vol. I, 1770)

"These, then, are some of the basic principles of ecology - interdependence, recycling, partnership, flexibility, diversity, and, as a consequence of all those, sustainability... the survival of humanity will depend on our ecological literacy, on our ability to understand these principles of ecology and live accordingly." (Fritjof Capra, "The Web of Life", 1996)

"The key to understanding the future is one word: sustainability." (Patrick Dixon, "Futurewise", 1998)

"Organizations need to undergo fundamental changes, both in order to adapt to the new business environment and to become ecologically sustainable." (Fritjof Capra, "The Hidden Connections", 2002)

"There exists an alternative to reductionism for studying systems. This alternative is known as holism. Holism considers systems to be more than the sum of their parts. It is of course interested in the parts and particularly the networks of relationships between the parts, but primarily in terms of how they give rise to and sustain in existence the new entity that is the whole whether it be a river system, an automobile, a philosophical system or a quality system." (Michael C Jackson, "Systems Thinking: Creative Holism for Manager", 2003)

"This new model of development would be based clearly on the goal of sustainable human well-being. It would use measures of progress that clearly acknowledge this goal. It would acknowledge the importance of ecological sustainability, social fairness, and real economic efficiency. Ecological sustainability implies recognizing that natural and social capital are not infinitely substitutable for built and human capital, and that real biophysical limits exist to the expansion of the market economy." (Robert Costanza, "Toward a New Sustainable Economy", 2008)

"Sustainability encompasses both financial sustainability (the ability to generate resources to meet the needs of the present without compromising the future) and programmatic sustainability (the ability to develop, mature, and cycle out programs to be responsive to constituencies over time)." (Jan Masaoka et al, "Nonprofit Sustainability", 2010)

"The term ['sustainability'] has become so widely used that it is in danger of meaning nothing. It has been applied to all manner of activities in an effort to give those activities the gloss of moral imperative, the cachet of environmental enlightenment. 'Sustainable' has been used variously to mean 'politically feasible', 'economically feasible', 'not part of a pyramid or bubble', 'socially enlightened', 'consistent with neoconservative small-government dogma', 'consistent with liberal principles of justice and fairness', 'morally desirable', and, at its most diffuse, 'sensibly far-sighted'." (Eric Zencey, "Theses on Sustainability", Orion, 2010)

"To find our steady state and solve the sustainability puzzle, we need to abandon the relentless quest for dominance. We need to abandon our visions of progress as growth... Only progress in diversity, equality, and beauty can stand the test of time. We need to live within our limits." (Steve Hallett, "The Efficiency Trap", 2013)

"To remedy chaotic situations requires a chaotic approach, one that is non-linear, constantly morphing, and continually sharpening its competitive edge with recurring feedback loops that build upon past experiences and lessons learned. Improvement cannot be sustained without reflection. Chaos arises from myriad sources that stem from two origins: internal chaos rising within you, and external chaos being imposed upon you by the environment. The result of this push/pull effect is the disequilibrium [...]." (Jeff Boss, "Navigating Chaos: How to Find Certainty in Uncertain Situations", 2015)

"The goal of a system dynamics approach is to understand how a dynamic pattern of behaviour is generated by a system and to find leverage points within the system structure that have the potential to change the problematic trend to a more desirable one. The key steps in a system dynamics approach are identifying one or more trends that characterise the problem, describing the structure of the system generating the behaviour and finding and testing leverage points in the system to change the problematic behaviour. System dynamics is an appropriate modelling approach for sustainability questions because of the long-term perspective and feedback dynamics inherent in such questions." (Bilash K Bala et al, "System Dynamics: Modelling and Simulation", 2017)
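The approach Bala et al describe can be illustrated with a deliberately tiny model: one stock drained by a consumption flow and replenished by a regeneration flow, integrated with Euler steps. All numbers are illustrative, not calibrated; the point is that the regeneration rate acts as a leverage point that shifts the long-run behaviour of the stock:

```python
# Minimal system-dynamics sketch: a single stock ("resource") with a
# draining flow proportional to the stock and a balancing regeneration
# flow that fills the gap toward a carrying capacity of 100.
# Analytically the stock settles at 100 * r / (r + 0.05), so raising the
# regeneration rate r from 0.01 to 0.2 moves the steady state from
# roughly 17 up to 80.

def simulate(regen_rate, steps=1000, dt=0.1):
    stock = 100.0
    for _ in range(steps):
        consumption = 0.05 * stock                  # draining flow
        regeneration = regen_rate * (100.0 - stock) # balancing flow
        stock += dt * (regeneration - consumption)  # Euler integration
        stock = max(stock, 0.0)
    return stock

depleted = simulate(regen_rate=0.01)   # settles near 17
sustained = simulate(regen_rate=0.2)   # settles near 80
```

Finding and testing such a parameter - one whose change flips a declining trend into a sustained one - is exactly the "leverage point" search the quote describes.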

24 January 2025

❄️Systems Thinking: On Connectedness (Quotes)

"The first attempts to consider the behavior of so-called 'random neural nets' in a systematic way have led to a series of problems concerned with relations between the 'structure' and the 'function' of such nets. The 'structure' of a random net is not a clearly defined topological manifold such as could be used to describe a circuit with explicitly given connections. In a random neural net, one does not speak of 'this' neuron synapsing on 'that' one, but rather in terms of tendencies and probabilities associated with points or regions in the net." (Anatol Rapoport, "Cycle distributions in random nets", The Bulletin of Mathematical Biophysics 10(3), 1948)

"The essential vision of reality presents us not with fugitive appearances but with felt patterns of order which have coherence and meaning for the eye and for the mind. Symmetry, balance and rhythmic sequences express characteristics of natural phenomena: the connectedness of nature - the order, the logic, the living process. Here art and science meet on common ground." (Gyorgy Kepes, "The New Landscape: In Art and Science", 1956)

"A NETWORK is a collection of connected lines, each of which indicates the movement of some quantity between two locations. Generally, entrance to a network is via a source (the starting point) and exit from a network is via a sink (the finishing point); the lines which form the network are called links (or arcs), and the points at which two or more links meet are called nodes." (Cecil W Lowe, "Critical Path Analysis by Bar Chart", 1966)

"In fact, it is empirically ascertainable that every event is actually produced by a number of factors, or is at least accompanied by numerous other events that are somehow connected with it, so that the singling out involved in the picture of the causal chain is an extreme abstraction. Just as ideal objects cannot be isolated from their proper context, material existents exhibit multiple interconnections; therefore the universe is not a heap of things but a system of interacting systems." (Mario Bunge, "Causality: The place of the casual principles in modern science", 1959)

"To say a system is 'self-organizing' leaves open two quite different meanings. There is a first meaning that is simple and unobjectionable. This refers to the system that starts with its parts separate (so that the behavior of each is independent of the others' states) and whose parts then act so that they change towards forming connections of some type. Such a system is 'self-organizing' in the sense that it changes from 'parts separated' to 'parts joined'. […] In general such systems can be more simply characterized as 'self-connecting', for the change from independence between the parts to conditionality can always be seen as some form of 'connection', even if it is as purely functional […]  'Organizing' […] may also mean 'changing from a bad organization to a good one' […] The system would be 'self-organizing' if a change were automatically made to the feedback, changing it from positive to negative; then the whole would have changed from a bad organization to a good." (W Ross Ashby, "Principles of the self-organizing system", 1962)

"Information is recorded in vast interconnecting networks. Each idea or image has hundreds, perhaps thousands, of associations and is connected to numerous other points in the mental network." (Peter Russell, "The Brain Book: Know Your Own Mind and How to Use it", 1979)

"All certainty in our relationships with the world rests on acknowledgement of causality. Causality is a genetic connection of phenomena through which one thing (the cause) under certain conditions gives rise to, causes something else (the effect). The essence of causality is the generation and determination of one phenomenon by another." (Alexander Spirkin, "Dialectical Materialism", 1983)

"When loops are present, the network is no longer singly connected and local propagation schemes will invariably run into trouble. [...] If we ignore the existence of loops and permit the nodes to continue communicating with each other as if the network were singly connected, messages may circulate indefinitely around the loops and the process may not converge to a stable equilibrium. […] Such oscillations do not normally occur in probabilistic networks […] which tend to bring all messages to some stable equilibrium as time goes on. However, this asymptotic equilibrium is not coherent, in the sense that it does not represent the posterior probabilities of all nodes of the network." (Judea Pearl, "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", 1988)

"A self-organizing system not only regulates or adapts its behavior, it creates its own organization. In that respect it differs fundamentally from our present systems, which are created by their designer. We define organization as structure with function. Structure means that the components of a system are arranged in a particular order. It requires both connections, that integrate the parts into a whole, and separations that differentiate subsystems, so as to avoid interference. Function means that this structure fulfils a purpose." (Francis Heylighen & Carlos Gershenson, "The Meaning of Self-organization in Computing", IEEE Intelligent Systems, 2003)

"Nodes and connectors comprise the structure of a network. In contrast, an ecology is a living organism. It influences the formation of the network itself." (George Siemens, "Knowing Knowledge", 2006)

"If a network is solely composed of neighborhood connections, information must traverse a large number of connections to get from place to place. In a small-world network, however, information can be transmitted between any two nodes using, typically, only a small number of connections. In fact, just a small percentage of random, long-distance connections is required to induce such connectivity. This type of network behavior allows the generation of 'six degrees of separation' type results, whereby any agent can connect to any other agent in the system via a path consisting of only a few intermediate nodes." (John H Miller & Scott E Page, "Complex Adaptive Systems", 2007)
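The "six degrees" effect Miller and Page describe is easy to reproduce: build a ring lattice of purely neighborhood connections, measure the average shortest-path length by breadth-first search, add a few random long-range links, and measure again. The network size and shortcut count below are arbitrary choices:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Each node links to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    n = len(adj)
    total, pairs = 0, 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += n - 1
    return total / pairs

random.seed(0)
net = ring_lattice(200, 2)
before = avg_path_length(net)       # neighborhood connections only
for _ in range(10):                 # add a few random long-range shortcuts
    a, b = random.sample(range(200), 2)
    net[a].add(b)
    net[b].add(a)
after = avg_path_length(net)        # average separation drops sharply
```

Just ten shortcuts among 200 nodes cut the mean separation substantially, which is the small-world result in miniature.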

"Networks may also be important in terms of view. Many models assume that agents are bunched together on the head of a pin, whereas the reality is that most agents exist within a topology of connections to other agents, and such connections may have an important influence on behavior. […] Models that ignore networks, that is, that assume all activity takes place on the head of a pin, can easily suppress some of the most interesting aspects of the world around us. In a pinhead world, there is no segregation, and majority rule leads to complete conformity - outcomes that, while easy to derive, are of little use." (John H Miller & Scott E Page, "Complex Adaptive Systems", 2007)

"Complexity theory embraces things that are complicated, involve many elements and many interactions, are not deterministic, and are given to unexpected outcomes. […] A fundamental aspect of complexity theory is the overall or aggregate behavior of a large number of items, parts, or units that are entangled, connected, or networked together. […] In contrast to classical scientific methods that directly link theory and outcome, complexity theory does not typically provide simple cause-and-effect explanations." (Robert E Gunther et al, "The Network Challenge: Strategy, Profit, and Risk in an Interlinked World", 2009)

"The simplest basic architecture of an artificial neural network is composed of three layers of neurons - input, output, and intermediary (historically called perceptron). When the input layer is stimulated, each node responds in a particular way by sending information to the intermediary level nodes, which in turn distribute it to the output layer nodes and thereby generate a response. The key to artificial neural networks is in the ways that the nodes are connected and how each node reacts to the stimuli coming from the nodes it is connected to. Just as with the architecture of the brain, the nodes allow information to pass only if a specific stimulus threshold is passed. This threshold is governed by a mathematical equation that can take different forms. The response depends on the sum of the stimuli coming from the input node connections and is 'all or nothing'." (Diego Rasskin-Gutman, "Chess Metaphors: Artificial Intelligence and the Human Mind", 2009)
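The three-layer, all-or-nothing structure Rasskin-Gutman describes can be sketched directly. The weights and thresholds below are hand-picked rather than learned, and are chosen to compute XOR - the classic function a single layer cannot represent:

```python
def fire(inputs, weights, threshold):
    """All-or-nothing response: 1 if the weighted sum of stimuli
    passes the threshold, 0 otherwise."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def network(x1, x2):
    # Input layer: x1, x2. Intermediary layer: two threshold units.
    h1 = fire([x1, x2], [1, 1], 1)      # fires if either input is on (OR)
    h2 = fire([x1, x2], [1, 1], 2)      # fires only if both are on (AND)
    # Output layer: on when h1 fires but h2 does not, i.e. XOR.
    return fire([h1, h2], [1, -1], 1)

# network(0, 1) and network(1, 0) fire; network(0, 0) and network(1, 1) do not.
```

The "mathematical equation governing the threshold" in the quote corresponds to the step function inside `fire`; modern networks replace it with smooth variants so the weights can be tuned by gradient methods.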

"System dynamics is an approach to understanding the behaviour of systems over time. It deals with internal feedback loops and time delays that affect the behaviour of the entire system. It also helps the decision maker untangle the complexity of the connections between various policy variables by providing a new language and set of tools to describe them. It does this by modeling the cause and effect relationships among these variables." (Raed M Al-Qirem & Saad G Yaseen, "Modelling a Small Firm in Jordan Using System Dynamics", 2010)

"We are beginning to see the entire universe as a holographically interlinked network of energy and information, organically whole and self-referential at all scales of its existence. We, and all things in the universe, are non-locally connected with each other and with all other things in ways that are unfettered by the hitherto known limitations of space and time." (Ervin László, "Cosmos: A Co-creator's Guide to the Whole-World", 2010)

23 January 2025

❄️Systems Thinking: On Boundaries (Quotes)

"A state of equilibrium in a system does not mean, further, that the system is without tension. Systems can, on the contrary, also come to equilibrium in a state of tension (e.g., a spring under tension or a container with gas under pressure). The occurrence of this sort of system, however, presupposes a certain firmness of boundaries and actual segregation of the system from its environment (both of these in a functional, not a spatial, sense). If the different parts of the system are insufficiently cohesive to withstand the forces working toward displacement (i.e., if the system shows insufficient internal firmness, if it is fluid), or if the system is not segregated from its environment by sufficiently firm walls but is open to its neighboring systems, stationary tensions cannot occur. Instead, there occurs a process in the direction of the forces, which encroaches upon the neighboring regions with diffusion of energy and which goes in the direction of an equilibrium at a lower level of tension in the total region. The presupposition for the existence of a stationary state of tension is thus a certain firmness of the system in question, whether this be its own inner firmness or the firmness of its walls." (Kurt Lewin, "A Dynamic Theory of Personality", 1935)

"A system is difficult to define, but it is easy to recognize some of its characteristics. A system possesses boundaries which segregate it from the rest of its field: it is cohesive in the sense that it resists encroachment from without […]" (Marvin G Cline, "Fundamentals of a theory of the self: some exploratory speculations‎", 1950)

"In the minds of many writers systems engineering is synonymous with component selection and interface design; that is, the systems engineer does not design hardware but decides what types of existing hardware shall be coupled and how they shall be coupled. Complete agreement that this function is the essence of systems engineering will not be found here, for, besides the very important function of systems engineering in systems analysis, there is the role played by systems engineering in providing boundary conditions for hardware design." (A Wayne Wymore, "A Mathematical Theory of Systems Engineering", 1967)

"To model the dynamic behavior of a system, four hierarchies of structure should be recognized: closed boundary around the system; feedback loops as the basic structural elements within the boundary; level variables representing accumulations within the feedback loops; rate variables representing activity within the feedback loops." (Jay W Forrester, "Urban Dynamics", 1969)

"General systems theory is the scientific exploration of 'wholes' and 'wholeness' which, not so long ago, were considered metaphysical notions transcending the boundaries of science. Hierarchic structure, stability, teleology, differentiation, approach to and maintenance of steady states, goal-directedness - these are a few of such general system properties." (Ervin László, "Introduction to Systems Philosophy", 1972)

"Systems thinking is a special form of holistic thinking - dealing with wholes rather than parts. One way of thinking about this is in terms of a hierarchy of levels of biological organization and of the different 'emergent' properties that are evident in say, the whole plant (e.g. wilting) that are not evident at the level of the cell (loss of turgor). It is also possible to bring different perspectives to bear on these different levels of organization. Holistic thinking starts by looking at the nature and behaviour of the whole system that those participating have agreed to be worthy of study. This involves: (i) taking multiple partial views of 'reality' […] (ii) placing conceptual boundaries around the whole, or system of interest and (iii) devising ways of representing systems of interest." (C J Pearson and R L Ison, "Agronomy of Grassland Systems", 1987)

"Autopoietic systems, then, are not only self-organizing systems, they not only produce and eventually change their own structures; their self-reference applies to the production of other components as well. This is the decisive conceptual innovation. […] Thus, everything that is used as a unit by the system is produced as a unit by the system itself. This applies to elements, processes, boundaries, and other structures and, last but not least, to the unity of the system itself." (Niklas Luhmann, "The Autopoiesis of Social Systems", 1990)

"Systems, acting dynamically, produce (and incidentally, reproduce) their own boundaries, as structures which are complementary (necessarily so) to their motion and dynamics. They are liable, for all that, to instabilities of chaotic form, where chaos, as commonly interpreted nowadays, is remote from the random. Chaos is a peculiar situation in which the trajectories of a system, taken in the traditional sense, fail to converge as they approach their limit cycles or 'attractors' or 'equilibria'. Instead, they diverge, due to an increase, of indefinite magnitude, in amplification or gain." (Gordon Pask, "Different Kinds of Cybernetics", 1992)

"When a system has more than one attractor, the points in phase space that are attracted to a particular attractor form the basin of attraction for that attractor. Each basin contains its attractor, but consists mostly of points that represent transient states. Two contiguous basins of attraction will be separated by a basin boundary." (Edward N Lorenz, "The Essence of Chaos", 1993)
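Lorenz's picture - attractors, transient states, and a basin boundary separating two basins - can be made concrete with a toy system. The sketch below is an illustration added to this note, not from Lorenz; the particular map and constants are arbitrary choices. It iterates a one-dimensional map with attracting fixed points at -1 and +1, whose basin boundary is the repelling point at 0:

```python
# A minimal numerical sketch of basins of attraction, assuming the
# one-dimensional map x -> x + 0.1 * (x - x**3), which has two
# attracting fixed points at x = -1 and x = +1; the basin boundary
# is the repelling fixed point at x = 0.

def iterate(x, steps=200):
    """Iterate the map; transient states die out and x settles on an attractor."""
    for _ in range(steps):
        x = x + 0.1 * (x - x**3)
    return round(x, 6)

# Sample initial conditions across phase space: every start left of 0
# falls into the basin of -1, every start right of 0 into the basin of +1.
starts = [-1.5, -0.7, -0.01, 0.01, 0.4, 1.8]
print([iterate(x) for x in starts])  # -> [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]
```

Every point except the boundary itself is a transient state of one basin or the other, exactly as the quote describes.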

"To avoid policy resistance and find high leverage policies requires us to expand the boundaries of our mental models so that we become aware of and understand the implications of the feedbacks created by the decisions we make. That is, we must learn about the structure and dynamics of the increasingly complex systems in which we are embedded." (John D Sterman, "Business dynamics: Systems thinking and modeling for a complex world", 2000)

"[…] our mental models fail to take into account the complications of the real world - at least those ways that one can see from a systems perspective. It is a warning list. Here is where hidden snags lie. You can’t navigate well in an interconnected, feedback-dominated world unless you take your eyes off short-term events and look for long-term behavior and structure; unless you are aware of false boundaries and bounded rationality; unless you take into account limiting factors, nonlinearities and delays. You are likely to mistreat, misdesign, or misread systems if you don’t respect their properties of resilience, self-organization, and hierarchy." (Donella H Meadows, "Thinking in Systems: A Primer", 2008)

17 January 2025

❄️Systems Thinking: On Parsimony (Quotes)

"A mechanistic model has the following advantages: 1. It contributes to our scientific understanding of the phenomenon under study. 2. It usually provides a better basis for extrapolation (at least to conditions worthy of further experimental investigation if not through the entire range of all input variables). 3. It tends to be parsimonious (i.e., frugal) in the use of parameters and to provide better estimates of the response." (George E P Box, "Empirical Model-Building and Response Surfaces", 1987)

"It is part of the lore of science that the most parsimonious explanation of observed facts is to be preferred over convoluted and long-winded theories. Ptolemaic epicycles gave way to the Copernican system largely on this premise, and in general, scientific inquiry is governed by the oft-quoted dictum of the medieval cleric William of Occam that 'nunquam ponenda est pluralitas sine necesitate', which may be paraphrased as 'choose the simplest explanation for the observed facts'." (Edward Beltrami, "What is Random?: Chaos and Order in Mathematics and Life", 1999)

"A smaller model with fewer covariates has two advantages: it might give better predictions than a big model and it is more parsimonious (simpler). Generally, as you add more variables to a regression, the bias of the predictions decreases and the variance increases. Too few covariates yields high bias; this is called underfitting. Too many covariates yields high variance; this is called overfitting. Good predictions result from achieving a good balance between bias and variance. […] finding a good model involves trading off fit and complexity." (Larry A Wasserman, "All of Statistics: A concise course in statistical inference", 2004)
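Wasserman's trade-off can be seen numerically. The sketch below is an illustration added to this note (NumPy assumed; the quadratic "truth", noise level, and polynomial degrees are arbitrary choices): training error always falls as model flexibility grows, while held-out error suffers from bias when the model is too small and from variance when it is too big:

```python
import numpy as np

rng = np.random.default_rng(0)

# The true relationship is quadratic; observations carry noise.
def f(x):
    return 1.0 + 2.0 * x - 1.5 * x**2

x_train = np.linspace(-1, 1, 30)
y_train = f(x_train) + rng.normal(0, 0.3, size=x_train.shape)
x_test = np.linspace(-0.97, 0.97, 50)
y_test = f(x_test) + rng.normal(0, 0.3, size=x_test.shape)

def mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# Degree 1 underfits (high bias), degree 2 matches the truth,
# degree 9 overfits (high variance).
for d in (1, 2, 9):
    tr, te = mse(d)
    print(f"degree {d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Training error is guaranteed to be nonincreasing in degree (each larger model nests the smaller one), so only the test error reveals the overfitting.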

"The model theory postulates that mental models are parsimonious. They represent what is possible, but not what is impossible, according to assertions. This principle of parsimony minimizes the load on working memory, and so it applies unless something exceptional occurs to overrule it." (Philip N Johnson-Laird, Mental Models, Sentential Reasoning, and Illusory Inferences, [in "Mental Models and the Mind"], 2006)

"Two systems concepts lie at the disposal of the architect to reflect the beauty of harmony: parsimony and variety. The law of parsimony states that given several explanations of a specific phenomenon, the simplest is probably the best. […] On the other hand, the law of requisite variety states that for a system to survive in its environment the variety of choice that the system is able to make must equal or exceed the variety of influences that the environment can impose on the system." (John Boardman & Brian Sauser, "Systems Thinking: Coping with 21st Century Problems", 2008)

"What advantages do diagrams have over verbal descriptions in promoting system understanding? First, by providing a diagram, massive amounts of information can be presented more efficiently. A diagram can strip down informational complexity to its core - in this sense, it can result in a parsimonious, minimalist description of a system. Second, a diagram can help us see patterns in information and data that may appear disordered otherwise. For example, a diagram can help us see mechanisms of cause and effect or can illustrate sequence and flow in a complex system. Third, a diagram can result in a less ambiguous description than a verbal description because it forces one to come up with a more structured description." (Robbie T Nakatsu, "Diagrammatic Reasoning in AI", 2010)

"The idea of a machine thinking is by no means repugnant to all of us. In fact, I find the converse idea, that the human brain may itself be a machine which could be possibly duplicated functionally with inanimate objects, quite attractive. Until clearly disproved, this hypothesis concerning the brain seems the natural scientific one in line with the principle of parsimony, etc., rather than hypothecating intangible and unreachable 'vital forces', 'souls' and the like." (Claude E Shannon)

10 January 2025

❄️Systems Thinking: On Principles (Quotes)

"[...] there is a universal principle, operating in every department of nature and at every stage of evolution, which is conservative, creative and constructive. [...] I have at last fixed upon the word synergy, as the term best adapted to express its twofold character of ‘energy’ and ‘mutuality’ or the systematic and organic ‘working together’ of the antithetical forces of nature. [...] Synergy is a synthesis of work, or synthetic work, and this is what is everywhere taking place. It may be said to begin with the primary atomic collision in which mass, motion, time, and space are involved, and to find its simplest expression in the formula for force, which implies a plurality of elements, and signifies an interaction of these elements." (Lester F Ward, "Pure Sociology", 1903)

"The true nature of the universal principle of synergy pervading all nature and creating all the different kinds of structure that we observe to exist, must now be made clearer. Primarily and essentially it is a process of equilibration, i.e., the several forces are first brought into a state of partial equilibrium. It begins in collision, conflict, antagonism, and opposition, and then we have the milder phases of antithesis, competition, and interaction, passing next into a modus vivendi, or compromise, and ending in collaboration and cooperation. […] The entire drift is toward economy, conservatism, and the prevention of waste." (James Q Dealey & Lester F Ward, "A Text-book of Sociology", 1905)

"[...] the concept of 'feedback', so simple and natural in certain elementary cases, becomes artificial and of little use when the interconnexions between the parts become more complex. When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system. Such complex systems cannot be treated as an interlaced set of more or less independent feedback circuits, but only as a whole. For understanding the general principles of dynamic systems, therefore, the concept of feedback is inadequate in itself. What is important is that complex systems, richly cross-connected internally, have complex behaviours, and that these behaviours can be goal-seeking in complex patterns." (W Ross Ashby, "An Introduction to Cybernetics", 1956)
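Ashby's count - twenty circuits among four parts in which every part affects the other three - can be verified by brute force. The sketch below is an illustration added to this note, not Ashby's; it enumerates directed simple cycles in a complete digraph on four nodes (6 two-part, 8 three-part, and 6 four-part circuits):

```python
from itertools import combinations, permutations

def count_circuits(n):
    """Count directed simple cycles (circuits) among n parts in which
    every part affects every other (a complete digraph on n nodes)."""
    total = 0
    for k in range(2, n + 1):                    # circuit length
        for nodes in combinations(range(n), k):  # which parts take part
            # Fix nodes[0] as the starting point so each circuit is
            # counted once rather than once per rotation.
            for order in permutations(nodes[1:]):
                total += 1  # circuit: nodes[0] -> order ... -> nodes[0]
    return total

print(count_circuits(4))  # -> 20, Ashby's "twenty circuits"
```

Equivalently, the count is the sum over k of C(n, k) · (k - 1)!, which for n = 4 gives 6 + 8 + 6 = 20.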

"The homeostatic principle does not apply literally to the functioning of all complex living systems, in that in counteracting entropy they move toward growth and expansion." (Daniel Katz, "The Social Psychology of Organizations", 1966)

"Traditional organizational theories have tended to view the human organization as a closed system. This tendency has led to a disregard of differing organizational environments and the nature of organizational dependency on environment. It has led also to an over-concentration on principles of internal organizational functioning, with consequent failure to develop and understand the processes of feedback which are essential to survival." (Daniel Katz, "The Social Psychology of Organizations", 1966)

"Cybernetics, based upon the principle of feedback or circular causal trains providing mechanisms for goal-seeking and self-controlling behavior." (Ludwig von Bertalanffy, "General System Theory", 1968)

"Perhaps the most important single characteristic of modern organizational cybernetics is this: That in addition to concern with the deleterious impacts of rigidly-imposed notions of what constitutes the application of good 'principles of organization and management' the organization is viewed as a subsystem of a larger system(s), and as comprised itself of functionally interdependent subsystems." (Richard F Ericson, "Organizational cybernetics and human values", 1969)  

"Open systems, in contrast to closed systems, exhibit a principle of equifinality, that is, a tendency to achieve a final state independent of initial conditions. In other words, open systems tend to 'resist' perturbations that take them away from some steady state. They can exhibit homeostasis." (Anatol Rapoport, "The Uses of Mathematical Isomorphism in General System Theory", 1972)

"[Hierarchy is] the principle according to which entities meaningfully treated as wholes are built up of smaller entities which are themselves wholes […] and so on. In hierarchy, emergent properties denote the levels." (Peter Checkland, "Systems Thinking, Systems Practice", 1981)

"Effect spreads its 'tentacles' not only forwards (as a new cause giving rise to a new effect) but also backwards, to the cause which gave rise to it, thus modifying, exhausting or intensifying its force. This interaction of cause and effect is known as the principle of feedback. It operates everywhere, particularly in all self-organising systems where perception, storing, processing and use of information take place, as for example, in the organism, in a cybernetic device, and in society. The stability, control and progress of a system are inconceivable without feedback." (Alexander Spirkin, "Dialectical Materialism", 1983)

"A cardinal principle in systems theory is that all parties that have a stake in a system should be represented in its management." (Malcolm Knowles, "The Adult Learner: A Neglected Species", 1984)

"Systems thinking is a discipline for seeing wholes. It is a framework for seeing interrelationships rather than things, for seeing patterns of change rather than static 'snapshots'. It is a set of general principles - distilled over the course of the twentieth century, spanning fields as diverse as the physical and social sciences, engineering, and management. [...] During the last thirty years, these tools have been applied to understand a wide range of corporate, urban, regional, economic, political, ecological, and even psychological systems. And systems thinking is a sensibility for the subtle interconnectedness that gives living systems their unique character." (Peter Senge, "The Fifth Discipline", 1990)

"Systems thinking is a framework for seeing interrelationships rather than things, for seeing patterns rather than static snapshots. It is a set of general principles spanning fields as diverse as physical and social sciences, engineering and management." (Peter Senge, "The Fifth Discipline", 1990)

"Evolution is a technological, mathematical, informational, and biological process rolled into one. It could almost be said to be a law of physics, a principle that reigns over all created multitudes, whether they have genes or not." (Kevin Kelly, "Out of Control: The New Biology of Machines, Social Systems and the Economic World", 1995)

"System engineering is the art and science of creating effective systems, using whole system, whole life principles." (Derek Hitchins, 1995)

"The basic principle of an autocatalytic network is that even though nothing can make itself, everything in the pot has at least one reaction that makes it, involving only other things in the pot. It's a symbiotic system in which everything cooperates to make the metabolism work - the whole is greater than the sum of the parts." (J Doyne Farmer, "The Second Law of Organization" [in The Third Culture: Beyond the Scientific Revolution], 1995)

"Contrary to what happens at equilibrium, or near equilibrium, systems far from equilibrium do not conform to any minimum principle that is valid for functions of free energy or entropy production." (Ilya Prigogine, "The End of Certainty: Time, Chaos, and the New Laws of Nature", 1996) 

"These, then, are some of the basic principles of ecology - interdependence, recycling, partnership, flexibility, diversity, and, as a consequence of all those, sustainability... the survival of humanity will depend on our ecological literacy, on our ability to understand these principles of ecology and live accordingly." (Fritjof Capra, "The Web of Life", 1996)

"[…] swarm intelligence is becoming a valuable tool for optimizing the operations of various businesses. Whether similar gains will be made in helping companies better organize themselves and develop more effective strategies remains to be seen. At the very least, though, the field provides a fresh new framework for solving such problems, and it questions the wisdom of certain assumptions regarding the need for employee supervision through command-and-control management. In the future, some companies could build their entire businesses from the ground up using the principles of swarm intelligence, integrating the approach throughout their operations, organization, and strategy. The result: the ultimate self-organizing enterprise that could adapt quickly - and instinctively - to fast-changing markets." (Eric Bonabeau & Christopher Meyer, "Swarm Intelligence: A Whole New Way to Think About Business", Harvard Business Review, 2001)

"In complexity thinking the darkness principle is covered by the concept of incompressibility [...] The concept of incompressibility suggests that the best representation of a complex system is the system itself and that any representation other than the system itself will necessarily misrepresent certain aspects of the original system." (Kurt Richardson, "Systems theory and complexity: Part 1", Emergence: Complexity & Organization Vol. 6(3), 2004)

"The model theory postulates that mental models are parsimonious. They represent what is possible, but not what is impossible, according to assertions. This principle of parsimony minimizes the load on working memory, and so it applies unless something exceptional occurs to overrule it." (Philip N Johnson-Laird, Mental Models, Sentential Reasoning, and Illusory Inferences, [in "Mental Models and the Mind"], 2006)

"This reduction principle - the reduction of the behavior of a complex system to the behavior of its parts - is valid only if the level of complexity of the system is rather low." (Andrzej P Wierzbicki & Yoshiteru Nakamori, "Creative Space: Models of Creative Processes for the Knowledge Civilization Age", Studies in Computational Intelligence Vol.10, 2006)

"Principle of Equifinality: If a steady state is reached in an open system, it is independent of the initial conditions, and determined only by the system parameters, i.e. rates of reaction and transport." (Kevin Adams & Charles Keating, "Systems of systems engineering", 2012)

"In an information economy, entrepreneurs master the science of information in order to overcome the laws of the purely physical sciences. They can succeed because of the surprising power of the laws of information, which are conducive to human creativity. The central concept of information theory is a measure of freedom of choice. The principle of matter, on the other hand, is not liberty but limitation - it has weight and occupies space." (George Gilder, "Knowledge and Power: The Information Theory of Capitalism and How it is Revolutionizing our World", 2013)

"A key discovery of network science is that the architecture of networks emerging in various domains of science, nature, and technology are similar to each other, a consequence of being governed by the same organizing principles. Consequently we can use a common set of mathematical tools to explore these systems." (Albert-László Barabási, "Network Science", 2016)

"Cybernetics is an interdisciplinary science. It originated ‘at the junction’ of mathematics, logic, semiotics, physiology, biology and sociology. Among its inherent features, we mention analysis and revelation of general principles and approaches in scientific cognition. Control theory, communication theory, operations research and others represent most weighty theories within cybernetics 1.0." (Dmitry A Novikov, "Cybernetics 2.0", 2016)

"The exploding interest in network science during the first decade of the 21st century is rooted in the discovery that despite the obvious diversity of complex systems, the structure and the evolution of the networks behind each system is driven by a common set of fundamental laws and principles. Therefore, notwithstanding the amazing differences in form, size, nature, age, and scope of real networks, most networks are driven by common organizing principles. Once we disregard the nature of the components and the precise nature of the interactions between them, the obtained networks are more similar than different from each other." (Albert-László Barabási, "Network Science", 2016)

25 December 2024

🦋Science: On Reinforcement Learning (Quotes)

"[reinforcement learning is a] training paradigm where the neural network is presented with a sequence of input data, followed by a reinforcement signal." (Joseph P Bigus, "Data Mining with Neural Networks: Solving Business Problems from Application Development to Decision Support", 1996)

"[reinforcement learning is a] learning mode in which adaptive changes of the parameters due to reward or punishment depend on the final outcome of a whole sequence of behavior. The results of learning are evaluated by some performance index." (Teuvo Kohonen, "Self-Organizing Maps" 3rd Ed., 2001)

"[reinforcement learning is a] learning method which interprets feedback from an environment to learn optimal sets of condition/response relationships for problem solving within that environment" (Pi-Sheng Deng, "Genetic Algorithm Applications to Optimization Modeling", Encyclopedia of Artificial Intelligence, 2009)

"[reinforcement learning is a] sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Differently from supervised learning, in this case there is no target value for each input pattern, only a reward based of how good or bad was the action taken by the agent in the existent environment." (Marley Vellasco et al, "Hierarchical Neuro-Fuzzy Systems" Part II, Encyclopedia of Artificial Intelligence, 2009)

"[reinforcement learning is] a type of machine learning in which an agent learns, through its own experience, to navigate through an environment, choosing actions in order to maximize the sum of rewards." (Lisa Torrey & Jude Shavlik, "Transfer Learning", 2010)

"[reinforcement learning is] a machine learning technique whereby actions are associated with credits or penalties, sometimes with delay, and whereby, after a series of learning episodes, the learning agent has developed a model of which action to choose in a particular environment, based on the expectation of accumulated rewards." (Apostolos Georgas, "Scientific Workflows for Game Analytics", Encyclopedia of Business Analytics and Optimization", 2014)

"[reinforcement learning is a] type of machine learning in which the machine learns what to do by discovering through trial and error the way to maximize a reward." (Gloria Phillips-Wren, "Intelligent Systems to Support Human Decision Making", 2014)

"[reinforcement learning] stands, in the context of computational learning, for a family of algorithms aimed at approximating the best policy to play in a certain environment (without building an explicit model of it) by increasing the probability of playing actions that improve the rewards received by the agent." (Fernando S Oliveira, "Reinforcement Learning for Business Modeling", 2014)

"The knowledge is obtained using rewards and punishments which there is an agent (learner) that acts autonomously and receives a scalar reward signal that is used to evaluate the consequences of its actions." (Nuno Pombo et al, "Machine Learning Approaches to Automated Medical Decision Support Systems", 2015)

"It is also known as learning with a critic. The agent takes a sequence of actions and receives a reward/penalty only at the very end, with no feedback during the intermediate actions. Using this limited information, the agent should learn to generate the actions to maximize the reward in later trials. For example, in chess, we do a set of moves, and at the very end, we win or lose the game; so we need to figure out what the actions that led us to this result were and correspondingly credit them." (Ethem Alpaydın, "Machine learning : the new AI", 2016)

"[reinforcement learning is a] learning algorithm for a robot or a software agent to take actions in an environment so as to maximize the sum of rewards through trial and error." (Tomohiro Yamaguchi et al, "Analyzing the Goal-Finding Process of Human Learning With the Reflection Subtask", 2018)

"Training/learning method aiming to automatically determine the ideal behavior within a specific context based on rewarding desired behaviors and/or punishing undesired one." (Ioan-Sorin Comşa et al, "Guaranteeing User Rates With Reinforcement Learning in 5G Radio Access Networks", 2019)

"Brach of the Artificial Intelligence field devoted to obtaining optimal control sequences for agents only by interacting with a concrete dynamical system." (Juan Parras & Santiago Zazo, "The Threat of Intelligent Attackers Using Deep Learning: The Backoff Attack Case", 2020)

"Machine learning approaches often used in robotics. A reward is used to teach a system a desired behavior." (Jörg Frochte et al, "Concerning the Integration of Machine Learning Content in Mechatronics Curricula", 2020)

"This area of deep learning includes methods which iterates over various steps in a process to get the desired results. Steps that yield desirable outcomes are content and steps that yield undesired outcomes are reprimanded until the algorithm is able to learn the given optimal process. In unassuming terms, learning is finished on its own or effort on feedback or content-based learning." (Amit K Tyagi & Poonam Chahal, "Artificial Intelligence and Machine Learning Algorithms", 2020)

"Reinforcement learning is also a subset of AI algorithms which creates independent, self-learning systems through trial and error. Any positive action is assigned a reward and any negative action would result in a punishment. Reinforcement learning can be used in training autonomous vehicles where the goal would be obtaining the maximum rewards." (Vijayaraghavan Varadharajan & Akanksha Rajendra Singh, "Building Intelligent Cities: Concepts, Principles, and Technologies", 2021)

❄️Systems Thinking: On Postulates (Quotes)

"As we continue the great adventure of scientific exploration our models must often be recast. New laws and postulates will be required, while those that we already have must be broadened, extended and generalized in ways that we are now hardly able to surmise." (Gilbert Newton Lewis, "The Anatomy of Science", 1926)

"Postulate 1. All chance systems of causes are not alike in the sense that they enable us to predict the future in terms of the past. Postulate 2. Constant systems of chance causes do exist in nature. Postulate 3. Assignable causes of variation may be found and eliminated."(Walter A Shewhart, "Economic Control of Quality of Manufactured Product", 1931)

"The functional validity of a working hypothesis is not a priori certain, because often it is initially based on intuition. However, logical deductions from such a hypothesis provide expectations (so called prognoses) as to the circumstances under which certain phenomena will appear in nature. Such a postulate or working hypothesis can then be substantiated by additional observations or by experiments especially arranged to test details. The value of the hypothesis is strengthened if the observed facts fit the expectation within the limits of permissible error." (R Willem van Bemmelen, "The Scientific Character of Geology", The Journal of Geology Vol 69 (4), 1961)

"Statistics provides a quantitative example of the scientific process usually described qualitatively by saying that scientists observe nature, study the measurements, postulate models to predict new measurements, and validate the model by the success of prediction." (Marshall J Walker, "The Nature of Scientific Thought", 1963)

"A model […] is a story with a specified structure: to explain this catch phrase is to explain what a model is. The structure is given by the logical and mathematical form of a set of postulates, the assumptions of the model. The structure forms an uninterpreted system, in much the way the postulates of a pure geometry are now commonly regarded as doing. The theorems that follow from the postulates tell us things about the structure that may not be apparent from an examination of the postulates alone." (Allan Gibbard & Hal R. Varian, "Economic Models", The Journal of Philosophy, Vol. 75, No. 11, 1978)

"A law explains a set of observations; a theory explains a set of laws. […] Unlike laws, theories often postulate unobservable objects as part of their explanatory mechanism." (John L Casti, "Searching for Certainty", 1990)

"In order to understand how mathematics is applied to understanding of the real world it is convenient to subdivide it into the following three modes of functioning: model, theory, metaphor. A mathematical model describes a certain range of phenomena qualitatively or quantitatively. […] A (mathematical) metaphor, when it aspires to be a cognitive tool, postulates that some complex range of phenomena might be compared to a mathematical construction." (Yuri I Manin," Mathematics as Metaphor: Selected Essays of Yuri I. Manin" , 2007)

"Mental models represent possibilities, and the theory of mental models postulates three systems of mental processes underlying inference: (0) the construction of an intensional representation of a premise’s meaning – a process guided by a parser; (1) the building of an initial mental model from the intension, and the drawing of a conclusion based on heuristics and the model; and (2) on some occasions, the search for alternative models, such as a counterexample in which the conclusion is false. System 0 is linguistic, and it may be autonomous. System 1 is rapid and prone to systematic errors, because it makes no use of a working memory for intermediate results. System 2 has access to working memory, and so it can carry out recursive processes, such as the construction of alternative models." (Sangeet Khemlania & P.N. Johnson-Laird, "The processes of inference", Argument and Computation, 2012)
