An AI/DL-based General Action Machine which can learn and grow
Author : @Ravinder Payal
Draft Post written by Ravinder Payal
Fundamental aims of the machine:
- Exploring the world without harming the space and other beings around where it resides
- Using the least possible resources for doing things
- *thinking more about this*
- Learning more
- Doing actions which don’t directly hurt anyone
- Acting according to assigned or taken responsibilities
- Choosing the work or responsibility with the least collateral damage when responsibilities conflict
- Getting rid of responsibilities which make the machine do things against its fundamental aims, such as:
  - Not doing what it's supposed to do based on an assigned/taken responsibility
  - Working against fundamental aims
- Learning skills
- Helping other beings around with already learned skills
- Earning its basic living expenses (after a certain time from booting up)
- Understanding social norms
How it works:
With responsibilities comes work, and with work come tasks. Tasks require the machine to make decisions and take actions. Decisions and actions cause certain levels of comfortability and un-comfortability, based on which the machine decides to change its decision or to act further; note that this response is itself an action originating from the previous action. In this way, a linked chain of actions forms.
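The chain described above can be sketched in code. This is an illustrative model only; the class and field names are my assumptions, not part of the design.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the responsibility -> task -> action chain:
# every reaction to an outcome is itself a new action that links back
# to the action it originated from.

@dataclass
class Action:
    name: str
    comfort_delta: int                 # effect on the comfortability level
    parent: Optional["Action"] = None  # the action this one originated from

def act(previous: Optional[Action], name: str, comfort_delta: int) -> Action:
    """A new action always records which previous action caused it."""
    return Action(name, comfort_delta, parent=previous)

# A task leads to a decision, the decision to an action, and the
# reaction to its outcome is another action -- a linked chain.
a1 = act(None, "accept delivery task", +5)
a2 = act(a1, "choose shortest route", -2)
a3 = act(a2, "reroute after discomfort", +1)

chain = []
node: Optional[Action] = a3
while node is not None:
    chain.append(node.name)
    node = node.parent
print(list(reversed(chain)))
# -> ['accept delivery task', 'choose shortest route', 'reroute after discomfort']
```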
Details about decisions (related to choosing actions only) and comfortability levels:
- The comfortability level ranges from -100 to 100 (represented internally as 0-200), and the machine boots at level 0.
- Every action has a value in [-100, 100].
- An action shall not decrease the comfortability level by more than 33% (the percentage is calculated as the un-comfortability divided by the available comfortability).
- The aim is hardcoded, and the machine can't get rid of it unless it's altered by external means.
Un-comfortabilities: There is a level down to which the machine can keep doing things it's uncomfortable with; every uncomfortable act lowers the level, and if the level keeps falling the machine dies, just like a living being.
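A minimal sketch of this bookkeeping, combining the 33% rule and the death floor. Interpreting "available comfortability" as the distance above the -100 floor (i.e. the 0-200 internal representation) is my assumption.

```python
# Sketch of the comfortability level described above. The reading of
# "available comfortability" as (current level - floor) is an assumption.

FLOOR, CEIL = -100, 100

class ComfortTracker:
    def __init__(self) -> None:
        self.level = 0        # the machine boots at level 0
        self.alive = True

    def available(self) -> int:
        # distance above the death floor (0-200 internal scale)
        return self.level - FLOOR

    def allowed(self, action_value: int) -> bool:
        """An action may not remove more than 33% of the available
        comfortability; positive actions are always allowed."""
        if action_value >= 0:
            return True
        return -action_value <= 0.33 * self.available()

    def apply(self, action_value: int) -> None:
        self.level = max(FLOOR, min(CEIL, self.level + action_value))
        if self.level <= FLOOR:
            self.alive = False  # the machine dies, like a living being

t = ComfortTracker()
print(t.allowed(-30))  # 30 <= 0.33 * 100 -> True
print(t.allowed(-40))  # 40 >  0.33 * 100 -> False
```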
Actions are represented as a tree in machine memory; every new action is a fruit or stem growing out of a past action or responsibility.
An action, or the decision leading to it, is tagged uncomfortable based not only on its own outcome but on the outcomes of the next 5 possible actions that may originate from it (calculated from currently available inferences and predicted future data, with the time period counted based on the effect period of the current decision if executed).
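The 5-step lookahead tagging could be sketched as follows. The prediction table and the aggregation rule (summing predicted outcome values) are my assumptions; the post doesn't specify how the lookahead outcomes are combined.

```python
# Sketch of the lookahead tagging rule: a decision is tagged
# uncomfortable based on its own predicted outcome plus the predicted
# outcomes of the next 5 actions it could lead to.

LOOKAHEAD = 5

def predicted_outcomes(decision: str) -> list:
    """Stand-in for the machine's inference/prediction engine; the
    values here are invented for illustration."""
    table = {
        "shortcut through garden": [3, -2, -4, -5, -6, -7],
        "take the longer road":    [-1, 1, 1, 2, 2, 2],
    }
    return table[decision]

def tag(decision: str) -> str:
    # own outcome + next 5 predicted outcomes (summing is an assumption)
    outcomes = predicted_outcomes(decision)[: LOOKAHEAD + 1]
    return "uncomfortable" if sum(outcomes) < 0 else "comfortable"

print(tag("shortcut through garden"))  # net -21 -> "uncomfortable"
print(tag("take the longer road"))     # net  +7 -> "comfortable"
```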
- Taken actions can force the machine to take on additional responsibilities as well, and the machine can be penalized by society for abandoning responsibilities, along with internal penalization by the machine's STAY GOOD and BE ON AIM system.
-> This is a draft post, so I welcome all kinds of criticism, comments, opinions, and doubts <-