The Kumo project proposes a network crawler that gathers information about the peers in the network and their behaviour, enabling the detection of changes in that behaviour.
Find out more about the project and its involvement in ONTOCHAIN in this interview with team member Leonardo Bautista-Gomez.
Could you please introduce yourself?
My name is Leonardo Bautista-Gomez. I am a Senior Researcher at the Barcelona Supercomputing Center, where I lead a group of researchers and engineers working on high-performance computing, machine learning and blockchain technology.
How did you hear about ONTOCHAIN?
I heard about ONTOCHAIN through a LinkedIn post about a European H2020 blockchain project aiming to create a blockchain ecosystem. The main objective of the project was to expand the Internet of Trust by creating ontology systems that would help users demonstrate and verify the validity of data shared on the web.
What motivated you to apply?
When I saw the call, I thought its objectives aligned very well with the aims of blockchain technology and with the original reason that got me interested in blockchain applications. It is important to develop decentralized reputation systems and semantic web technologies to guarantee that the content we find online is trustworthy. Applying for the project was therefore a no-brainer.
How was the application process?
The call was announced in November 2020. At that moment we started evaluating how our project could fit into it. Six technical topics are covered by the ONTOCHAIN project: Applications, Semantic Interoperability, On-chain data management, Off-chain knowledge management, Ecosystem economy, and Ecosystem scalability and integration. Our project fit well into the scalability category, as it deals with the novel Ethereum2 technology, including Proof-of-Stake and sharding.
Can you briefly explain your project and its contribution to the ONTOCHAIN software ecosystem?
Our project, Kumo, aims to create a system that can deliver trustworthy information about the Ethereum2 chain and network. To achieve that goal, we plan to build a network crawler capable of connecting to thousands of Ethereum2 nodes and gathering statistical data about network activity, as well as data about node performance. In addition to this data-gathering tool, we also plan to build an analysis tool that can easily extract information from the data obtained. Finally, we will publish this summarized information online and develop a programming interface for interacting with the data.
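To give a flavour of the kind of peer data involved, here is a minimal sketch in Python, assuming a locally running beacon node that exposes the standard Eth2 Beacon HTTP API (its GET /eth/v1/node/peers endpoint). The node URL is an assumption for illustration, and the snippet only shows the idea of collecting and summarizing peer statistics; it is not Kumo's actual crawler, which connects to peers directly rather than asking a single node.

```python
import requests
from collections import Counter

# Assumed address of a local beacon node; 5052 is a common default,
# but any client exposing the standard Beacon API would do.
BEACON_API = "http://localhost:5052"

def fetch_peers():
    """Fetch the peer list via the standard GET /eth/v1/node/peers endpoint."""
    resp = requests.get(f"{BEACON_API}/eth/v1/node/peers", timeout=10)
    resp.raise_for_status()
    return resp.json().get("data", [])

def summarize(peers):
    """Aggregate simple per-peer attributes into counts, the kind of
    snapshot a crawler could record over time to detect behaviour changes."""
    states = Counter(p.get("state", "unknown") for p in peers)
    directions = Counter(p.get("direction", "unknown") for p in peers)
    return {"total": len(peers),
            "by_state": dict(states),
            "by_direction": dict(directions)}

if __name__ == "__main__":
    print(summarize(fetch_peers()))
```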
The ONTOCHAIN project is based on a co-development process, how can you benefit from an experience like this? And what type of synergies are you eager to explore with the other selected teams?
There are multiple projects across the six categories, which makes it easy to establish communication across different layers of the software stack, as well as across different technology readiness levels. This will allow us to explore how other projects can use our services, as well as how we could benefit from other projects related to trustworthy content on the web. There are projects on data analytics, as well as on decentralized identity, that could really boost the use cases of the technology we are developing. On the other hand, given the imminent arrival of Ethereum2, we believe our service should be extremely useful for anyone planning to deploy their applications on Ethereum2.
What are your expectations regarding the new software ecosystem that ONTOCHAIN will deliver, its contribution to the NGI priority areas, and benefits for end users?
We expect this to be a demonstrator of how to build more trustworthy content on the Internet. The cornerstone technology enabling this leap forward is blockchain, and for this effort to be fruitful, a whole ecosystem of different components that work together is necessary. Isolated products won't be able to generate the impulse required to transform the Next Generation Internet into a trustworthy environment, where facts carry more weight than fake data. Thus, it is imperative to offer users the tools and means to work with and manipulate semantically tagged datasets and ontologies that can be verified in an easy and independent fashion. Achieving this will greatly benefit the new generation of Internet users.