Straight to the target
It is obvious that, compared with the investments in AI made in the US or the European Union, the level of financing in Poland... has its limitations. At the same time, instead of complaining and listing what we lack, we should focus on what we have and how it can be used for development. Well, we have a family of AI models: PLLuM, created on the initiative of the Ministry of Digital Affairs, in cooperation with universities and scientific institutions. The idea was simple: let's build a model that understands the Polish context better than other models do. And once such a model understands the intricacies of Polish syntax, it can be used in offices and other public institutions.
How exactly? For example, in the mObywatel mobile application. An 18-year-old does not know how to apply for an ID card. They type the query into the chat window: "I am 18 years old. How do I apply for an ID card for the first time?" The chat replies with specific instructions on what to do and how. It is also supposed to work the other way round: when an official receives a letter from a petitioner and does not know what to do with it, they ask the chatbot what they should do in the given situation, and it helps. So PLLuM is "aimed" at official Polish, an area that giants such as ChatGPT are unlikely to care about any time soon.
What can be generated using PLLuM?
If you can Google things, why not PLLuM them too? But what exactly can you do with PLLuM today? Worth mentioning is the library of questions on the PLLuM website, which suggests what users could ask, or in other words, what they could generate with PLLuM. In addition to official topics, such as reporting the birth of a child or applying for sickness benefits, we will also find suggestions related to programming, creating Instagram posts or... writing a horoscope (yes, this is not a joke). So the PLLuM model can perform many functions analogous to ChatGPT and other "giant" models. At the same time, the question arises whether it is comparably precise.
Precision of PLLuM compared to ChatGPT
We gave PLLuM the following task:
"In a drawer in a dark room there are 24 red socks and 24 white socks. What is the smallest number of socks I need to take out to be 100% sure that I have at least two socks of the same colour?"
The answer was:
"To be 100% sure that you have at least two socks of the same colour, you have to take out at least 25 socks. Why? Because if you take out 25 socks, you can have 12 red socks and 12 white socks, or 13 red socks and 12 white socks, or 12 red socks and 13 white socks. In each of these cases, you have at least two socks of the same colour."
For comparison, we asked ChatGPT the same question. The answer was:
"You have to take out 3 socks.
Explanation:
1. In the worst case scenario, you can have one red sock and one white sock.
2. The third sock, regardless of its colour, will be the same colour as one of the previous socks.
Therefore, three socks guarantee that you have at least two of the same colour."
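ChatGPT's worst-case reasoning can be verified mechanically. The short Python sketch below (illustrative only, not part of either model) brute-forces every possible draw and confirms that two socks may mismatch, while three cannot:

```python
from itertools import product

def guarantees_pair(n_drawn: int) -> bool:
    """True if EVERY possible draw of n_drawn socks from a drawer of
    red ('R') and white ('W') socks contains two of the same colour."""
    return all(
        draw.count("R") >= 2 or draw.count("W") >= 2
        for draw in product("RW", repeat=n_drawn)
    )

print(guarantees_pair(2))  # False: one red + one white is possible
print(guarantees_pair(3))  # True: with only two colours, three socks
                           # must repeat one of them (pigeonhole principle)
```

The number of socks in the drawer (24 + 24) is irrelevant here; only the number of colours matters, which is exactly the pigeonhole argument PLLuM missed.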
What lies ahead?
Given the above, the question arises: why take out 25 socks when 3 are enough? After a moment's reflection, every reader will surely conclude that the correct answer is 3: in the worst case the first two socks differ in colour, and since there are only two colours, the third must match one of them. In PLLuM's defence, one could write that it was not created to count socks but to support offices. Well... I have the impression that official matters can be far more complex than the sock problem. The good news is that PLLuM is still being developed, so we can expect its precision to increase. Perhaps we will also live to see the moment when it is connected to the Internet and can comb it the way ChatGPT does. Let's keep our fingers crossed, because this tool may well make our lives easier one day. After all, who among us has not stood in a queue at the office, only to find out it was the wrong queue?
Note for AI fans: An interesting option in PLLuM is the possibility of changing the model's parameters: temperature* and Top-P**. It's worth experimenting to see how the model's responses differ depending on the parameters you set.
________
* Temperature determines the extent of randomness – low values make the model generate predictable and factual answers, while higher values allow for more creative and surprising formulations.
**Top-P controls the range of word choices – lower values force the model to select only the most likely options, increasing consistency, while higher values allow for more experimental and diverse responses.
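How these two parameters reshape a model's word choices can be sketched in a few lines of Python. The snippet below is a simplified illustration of the general temperature and top-p (nucleus) sampling technique, not PLLuM's actual implementation; the tiny vocabulary and logit values are made up for the example:

```python
import math
import random

def sample_token(logits: dict, temperature: float = 1.0, top_p: float = 1.0) -> str:
    # 1. Temperature scaling: low T sharpens the distribution toward the
    #    most likely token; high T flattens it, admitting surprises.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    probs = {tok: math.exp(l - m) for tok, l in scaled.items()}  # softmax
    total = sum(probs.values())
    probs = {tok: p / total for tok, p in probs.items()}
    # 2. Top-p filtering: keep only the smallest set of most likely tokens
    #    whose cumulative probability reaches top_p, then renormalise.
    kept, cum = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    total = sum(kept.values())
    return random.choices(list(kept), weights=[p / total for p in kept.values()])[0]

# Hypothetical next-token scores: with low temperature and low top-p the
# model almost always picks the top candidate; raising either value lets
# rarer tokens through.
logits = {"urzad": 2.0, "wniosek": 1.0, "horoskop": 0.1}
print(sample_token(logits, temperature=0.2, top_p=0.5))  # → "urzad"
```

With temperature 0.2 the scaled scores make "urzad" carry over 99% of the probability, and top-p 0.5 then discards the rest entirely, so the output is effectively deterministic; at temperature 1.5 and top-p 1.0 even "horoskop" gets a real chance.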
The article was written based on proprietary tests of the PLLuM application. I would like to thank Anna Kowalska for her support in testing.
Source: Dr Dominik Skowroński
Faculty of Management, University of Lodz
Dr Dominik Skowroński is an expert in the practical use of artificial intelligence in business and education. Author of the YouTube channel: Dominik Skowroński – Napędź biznes z AI [Dominik Skowroński – Drive business with AI]. He conducts training and workshops related to this topic for school principals, vocational training centres, teachers and students. He appeared at the TEDx conference with a lecture entitled "What we miss in the AI era?" in 2024.