This article presents a system that carries out highly effective searches over collections of textual information, such as those found on the Internet. The system consists of two major parts. The first part is an agent, Musag, that learns to relate concepts that are semantically "similar" to one another. In other words, this agent dynamically builds a dictionary of expressions for a given concept, capturing the words people have in mind when they mention that concept. We achieve this by learning from the context in which these words appear. The second part is another agent, Sag, which is responsible for retrieving documents given a set of keywords with relative weights. This retrieval makes use of the dictionary learned by Musag, in the sense that the documents retrieved for a query are related to the given concept according to the context of previously scanned documents. In this way, we overcome two main problems of current text search engines, which are largely based on syntactic methods. The first problem is that a keyword given in the query may be ambiguous, leading to the retrieval of documents unrelated to the requested topic. The second problem is that relevant documents are not recommended to the user because they do not contain the specific keyword mentioned in the query. Using context-learning methods, we can retrieve such documents if they contain other words, learned by Musag, that are related to the main concept. We describe the agents' system architecture, along with the nature of their interactions. We describe our learning and search algorithms and present results from experiments performed on specific concepts. We also discuss the notion of "cost of learning" and how it influences the learning process and the quality of the dictionary at any given time.
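The abstract does not specify the algorithms themselves, but the core idea — learning a weighted dictionary of concept-related words from the contexts in which the concept appears, and then scoring documents by those weights — can be illustrated with a minimal sketch. All names (`learn_dictionary`, `score`), the windowed co-occurrence counting, and the frequency-based weighting below are illustrative assumptions for exposition, not the actual Musag/Sag algorithms described in the article.

```python
from collections import Counter

def learn_dictionary(concept, documents, window=5, top_k=5):
    """Sketch of Musag-style context learning: count words that
    co-occur with the concept inside a fixed context window, and
    keep the most frequent ones as the concept's 'dictionary'.
    The window size and normalization are assumed, not from the paper."""
    counts = Counter()
    for doc in documents:
        words = doc.lower().split()
        for i, w in enumerate(words):
            if w == concept:
                lo, hi = max(0, i - window), i + window + 1
                for neighbour in words[lo:i] + words[i + 1:hi]:
                    if neighbour != concept:
                        counts[neighbour] += 1
    total = sum(counts.values()) or 1
    # Relative co-occurrence frequency serves as the term's weight.
    return {w: c / total for w, c in counts.most_common(top_k)}

def score(document, concept, dictionary):
    """Sketch of Sag-style weighted retrieval: a document scores for
    the concept itself (weight 1.0) plus any learned related terms,
    so relevant documents can match even without the exact keyword."""
    words = document.lower().split()
    s = float(words.count(concept))
    for term, weight in dictionary.items():
        s += weight * words.count(term)
    return s
```

Under this sketch, a document containing only learned related terms (e.g. "programming language") still receives a nonzero score for the concept, which is how the second problem described above — relevant documents lacking the literal query keyword — would be addressed.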