Humans have developed a great variety of complex communicative systems (languages) without any centralized assistance. Accordingly, the evolution of human communication has often been modeled as the result of distributed learning among agents that are rewarded for successfully transmitting information to one another. These models, however, face two major challenges: 1) even in the most successful cases, the agents develop only a small number of communicative conventions, whereas humans have agreed upon thousands of words; 2) once groups of artificial agents converge on a set of communicative conventions, they have no incentive to improve or expand it, whereas the development of human languages is open-ended. Here, I show how these two challenges can be resolved by dynamically changing the problem that the agents learn to solve through communication. I suggest that a communicative problem that starts small and gradually increases in difficulty as the agents agree upon new communicative conventions is essential for the tractable evolution of rich communicative systems in decentralized multi-agent communities.
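The curriculum idea sketched above can be made concrete in a toy setting. The following is a minimal illustrative sketch, not the paper's actual model: agents play a standard naming game (speakers invent or reuse words, hearers adopt them, successful pairs align on a single word), and a hypothetical curriculum rule expands the set of objects to be named once the recent communicative success rate crosses a threshold. All parameter names (`threshold`, `window`, etc.) are assumptions chosen for illustration.

```python
import random

def naming_game_with_curriculum(n_agents=10, start_objects=2, max_objects=8,
                                threshold=0.95, window=200, max_rounds=50000,
                                seed=0):
    """Illustrative sketch (not the paper's model): a minimal naming game
    whose object set grows whenever recent success exceeds a threshold,
    i.e., the communicative problem starts small and gets harder only
    after the agents have converged on the current conventions."""
    rng = random.Random(seed)
    # Each agent's lexicon maps an object index to a set of candidate words.
    lexicons = [dict() for _ in range(n_agents)]
    n_objects = start_objects
    recent = []  # sliding window of recent successes
    for _ in range(max_rounds):
        obj = rng.randrange(n_objects)
        s, h = rng.sample(range(n_agents), 2)
        speaker, hearer = lexicons[s], lexicons[h]
        if not speaker.get(obj):
            # Speaker has no word for this object: invent a fresh one.
            speaker[obj] = {f"w{obj}_{rng.randrange(10**6)}"}
        word = rng.choice(sorted(speaker[obj]))
        success = word in hearer.get(obj, set())
        if success:
            # Alignment: both parties collapse to the winning word.
            speaker[obj] = {word}
            hearer[obj] = {word}
        else:
            # Hearer adopts the word as a new candidate.
            hearer.setdefault(obj, set()).add(word)
        recent.append(success)
        if len(recent) > window:
            recent.pop(0)
        # Curriculum step: expand the problem once agents have converged.
        if (n_objects < max_objects and len(recent) == window
                and sum(recent) / window >= threshold):
            n_objects += 1
            recent.clear()
    return n_objects, lexicons
```

In this sketch the expansion rule plays the role of the "gradually increasing difficulty" described above: without it, the simulation stalls at `start_objects` conventions with no pressure to grow the lexicon.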