Large Knowledge Models

Knowledge models that can facilitate reasoning by synthesizing and clarifying relevant information transparently across multiple domains. “Provide a semantic medium that is both more expressive and more computationally tractable than natural language, a medium able to support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both [scientific] literatures and associative memory”. LLMs have powerful capabilities, but their knowledge is opaque, not cumulative, and not easily updated or compared. Human society relies not only on individual brains with memory but on a cumulative scholarship to grow knowledge, compare alternative views, and more. Such AI could serve as a collective strategic assistant.

R&D Gaps (1)

The volume of scientific publications is overwhelming, making it difficult for humans to read, comprehend, and synthesize the entire body of literature. How can AI-generated knowledge become cumulative? What should a shared human–machine Wikipedia look like? We should collect and synthesize all the ...