Title: Deep and shallow models in medical expert systems
Authors: Washbrook, John
Keywords: Computer science; Artificial intelligence; Medicine; Expert systems (Computer science)
Issue Date: 1989
Publisher: Elsevier
Source: Artificial Intelligence in Medicine, 1989, Volume 1, Issue 1, Pages 11–28
Abstract: In the context of medical expert systems, the term "deep system" is often used synonymously with a system that models some kind of causal process or function. We argue that although causality might be necessary for a deep system, it is not sufficient on its own. A deep system must also meet its users' expectations regarding its flexibility as a problem solver and its human-computer interaction (dialogue structure and explanation structure). Meeting these expectations is essential if medical expert systems are to be accepted by their users. We illustrate our argument by evaluating a representative sample of medical expert systems. The systems are evaluated from the perspective of how explicitly they incorporate their particular models of expertise and how understandably they progress towards solutions; their dialogue and explanation structures are evaluated as well. Our analysis shows no strong correlation between causality and acceptability. On this basis we propose that a deep system is one that properly explicates its underlying model of human expertise.
URI: http://ktisis.cut.ac.cy/handle/10488/7060
ISSN: 0933-3657
DOI: http://dx.doi.org/10.1016/0933-3657(89)90013-4
Rights: © 1989 Published by Elsevier B.V.
Type: Article
Appears in Collections: Άρθρα/Articles
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.