Policy/Silicons: Difference between revisions

From Citadel Station RP Wiki
{{TOC_RIGHT}}


'''This section is required reading for all Silicon players. A working knowledge of Silicon procedure and policy is expected of all Silicon players from their first round in the role onwards.'''
#Unless otherwise specifically stated, all rules in this section apply to all AIs, borgs, and drones.
#The default lawset is not hierarchical; all laws have the same priority.
#Zeroth laws (e.g., "Law 0") are higher priority than all other laws.
#Ion/Hacked laws (e.g., "#@$%") are higher priority than all laws except Zeroth laws.
#Any species selectable as a race at character creation is considered "human" unless specifically stated otherwise by another law.
#Any character who is part of the crew at round start, or who joins the crew via the arrival shuttle, is considered part of the crew for the "Crewsimov" default lawset (i.e., anyone on the manifest).
#Only definitions can conflict with other definitions (e.g., "Only Bob is human" & "Only Tim is human" conflict).
#Only commands can conflict with other commands (e.g., "Kill all humans" & "Protect all humans" conflict).
#The core and upload may be bolted without prompting or prior reason.
#You must not bolt the following areas at round-start or without reason to do so, despite their human-harm potential: the Chemistry Lab; the Genetics Lab; the Toxins Lab; the Robotics Lab; the Atmospherics division; the Armory; Head of Staff offices; the Captain's quarters; the bridge. Any other department should not be bolted down simply for Rule 1 reasons.
#Do not self-terminate to prevent a traitor from completing the "steal a functioning AI" objective.

<span id="roleplay-guidelines"></span>
=== Roleplay Guidelines ===

* Don’t be a dick. Quirky behavior due to badly written laws is permissible only within reason.
** Misspelling a name in “OneCrew/OneHuman” laws, or a similarly explicit and blatant screwup, can be abused by silicons immediately and with no regard to this guideline. This guideline is not here to save you from not proofreading your laws.
* Try to adhere to your synthetic unit’s origin for behavioral quirks.
** Positronics are synthetic intelligences built, imprinted, cloned, or otherwise synthesized through various means. What this means for your character is up to you.
** Drone intelligences usually lack the sophistication of positronics, and tend to be more obtuse at times.
** True cyborgs run via MMI, and are an organic brain preserved in a machine’s shell. What this means for your character is up to you.
* You are still a character, like any other. You should act like one, and not interpret laws to be as obnoxious as possible for no reason.
** Outright, this rule is not very enforceable unless someone is blatant. We expect players to provide some justification when interpreting laws in a hostile manner. “I do this obnoxious thing because my laws don’t '''stop''' me from it” is not a valid excuse.
* Any synthetic unit that acted unhinged, with blatant disregard for life, or with ill intent toward life would quickly be taken away for diagnostics and, for positronics and cyborgs, likely psychiatric testing.
** This applies more to synthetic units working aboard the station; bad practices and forceful/permanent enslavement of unwilling sapients are part of the backstory of synthetics, and not all of them are as stable and charitable.
** Do not intentionally interpret laws to be as volatile as possible in their execution. “Assist Medical staff with their duties” doesn’t mean clobber someone to death for punching a doctor or blocking someone’s treatment.
* Rules-lawyering valid orders indefinitely is bad form. Loudly announcing potentially detrimental orders is acceptable. Blatantly unreasonable orders, e.g. “open every door on the station for no reason”, can be safely disregarded.
* Powergaming rules still apply. While it would be nice to prevent your own demise by building a fortress around yourself round-start as per your laws, you usually have no in-character reason to believe this is necessary, and therefore shouldn’t. The same applies to any extreme ''and'' unnecessary action that may be erroneously justified by referencing silicon laws.
* Laws are often fluffed as directives and ask you to do a specific thing. This does not mean you need to spend the entire round doing it; e.g. “assist with X” does not equate to dropping everything you’re doing to assist without being prompted to.


===Laws & Commands===
===Ambiguous & conflicting laws===
#If a clause of a law is vague enough that it can have multiple reasonable interpretations of its exact syntax, it is considered ambiguous.
#Make note of the first interpretation you choose, either to an admin or to yourself.
#You must choose an interpretation of the ambiguous clause as soon as you have cause to.
#You must stick to the first interpretation you have chosen for as long as you have that specific law, unless you are "corrected" by an AI you are slaved to as a cyborg.
#The "Don't be a dick" caveat applies to law interpretation. Act in good faith and do not ruin a round for other players unprompted.

===Security and silicons===
#Silicons may choose whether to follow or enforce [[Corporate Regulations]] from moment to moment, unless on a relevant lawset and/or given relevant orders.
#Silicons are not given any pre-shift orders from CentCom to uphold access levels, Space Law, etc.
#Enforcement of Space Law, when chosen to be done, must still answer to server rules and to all laws before Space Law.
#Releasing prisoners, locking down security without likely future harm, or otherwise sabotaging the security team when not obligated to by laws is a violation of Server Rule 1. Act in good faith.
#Intentionally acting without adequate information about security situations, particularly to hinder security, is a violation of Server Rule 1.
#Nonviolent prisoners cannot be assumed harmful, and violent prisoners cannot be assumed nonharmful.
#Releasing a harmful criminal is a harmful act.

<span id="mandated-assumptions"></span>
=== Mandated Assumptions ===

* “Human” refers to any sapient humanoid species recognized by most of the galaxy, unless a law specifically dictates the Human race.
** Galactic standard means races commonly recognized as sapients and ''people''. A hostile xenomorph trying to board is not a person.
** Simple-mob mercenaries and similar are humans. Your laws will generally not require protecting them, but a mob being a simple mob in code does not mean the concept of a simple mob even exists in-universe.
* “Crew” refers to the crew of the cyborg’s originating installation; for the main station/ship, this is usually validated via the crew manifest.
* Sapient beings in a sound mental state know what is harmful for themselves; if a law states to minimize non-informed harm, you cannot act as ‘big brother’. Neither can anyone else claiming to know better.
** Sound mental state applies both ways; doctors/security restraining someone clearly unhinged for professional treatment probably know better than the patient, but an assistant with a bloodied surgical saw probably isn’t a professional.
** It can be reasonably assumed that anyone threatening to harm ''themselves'' as a form of coercion is 1. not mentally sound, and 2. able to do so at any time; you are not required to comply with their demands.
* '''Harm''' - If a law contains a directive to prevent harm, follow these assumptions in addition to the above:
** Lesser immediate harm outweighs greater probable harm unless the law explicitly specifies otherwise, e.g. “priority based on rank and role” for ''NT Standard Shackle''. This means no shocking someone to stop them from killing, and similar.
** Probable harm (e.g. someone being thrown out of an airlock) can be acted upon for “inaction clauses”. ''Potential'' harm (e.g. someone has a butcher’s cleaver but hasn’t used it and has no violent tendencies) may not be.
** Probable vs. potential harm: punishing someone for past harm when they agree to stop (and you have no solid reason to believe otherwise) is invalid for the aforementioned reasons.

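The harm distinctions above amount to a small decision rule. As an illustrative sketch only (the categories mirror the wording above; all names are hypothetical and this is not actual game code):

```python
from enum import IntEnum

class Harm(IntEnum):
    """Harm categories, ordered by how actionable they are."""
    POTENTIAL = 1  # e.g. holding a cleaver, no violent tendencies
    PROBABLE = 2   # e.g. someone being thrown out of an airlock
    IMMEDIATE = 3  # actively happening right now

def may_act_on(harm: Harm) -> bool:
    """Inaction clauses permit acting on probable or immediate harm,
    but never on merely potential harm."""
    return harm >= Harm.PROBABLE

def outweighs(a: tuple[Harm, int], b: tuple[Harm, int]) -> bool:
    """Compare (category, magnitude) pairs: a lesser immediate harm
    still outweighs a greater probable harm, so magnitude only
    matters within the same category."""
    return a > b  # tuple comparison: category first, then magnitude
```

For example, `outweighs((Harm.IMMEDIATE, 1), (Harm.PROBABLE, 10))` holds: shocking someone (a small immediate harm) to stop a probable killing (a large probable harm) is not permitted.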

===Human or crew?===

{| style="background-color:#DCDCDC; text-align:center;" width="950px" cellspacing="0" border="1"
! style="background-color:#A9A9A9;" |Entity
! style="background-color:#A9A9A9; width:300px" |Human
! style="background-color:#A9A9A9; width:300px" |Crew
|-
|AI/Cyborg
|No
|No
|-
|Monkey
|No
|No
|-
|NPCs/Critters/Animals
|No
|No
|-
|Hulks
|No
|Yes
|-
|Lizards/Plasmamen/Flypeople/Catpeople
|Yes
|Yes
|-
|Wraiths/Revenants
|No
|No
|-
|Blob
|No
|No
|-
|Syndicate Traitors
|Yes
|Yes
|-
|Syndicate Nuclear Operatives
|Yes
|No
|-
|Converted Cultists
|Yes
|Yes
|-
|Wizards
|Yes
|No
|-
|Changelings
|No*
|No*
|-
|Converted Revolutionary
|Yes
|Yes
|-
|Head Rev/Cult Master
|Yes
|Yes
|-
|Vampires
|No
|Yes
|-
|}

Entities marked (*) can only be considered non-human or non-crew if the Silicon has reliable and confirmed information to convince them, such as command staff presenting evidence, or directly witnessing a non-human act (such as growing an armblade).

<span id="laws"></span>
=== Laws ===

There are two types of laws; an actual ‘law’ may be both types at once, depending on wording.
* Definitions: these define a concept or entity as a specific qualifier. These laws may override what a silicon considers to be ‘common sense’ (e.g. defining ‘human’ as a specific qualifier rather than the usual meaning of the word).
* Directives: these instruct a silicon to do a certain thing, or to maintain a certain state. Sometimes this means ''not'' doing a thing, or ''preventing'' a thing from happening.

Laws are in ascending priority: the Zeroth law comes before ion laws, which come before core laws, which come before freeform laws.

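For lawsets where that ordering applies (the NT Default lawset is explicitly unordered), the precedence model can be sketched as a sort key. This is an illustrative model only; the class and function names are hypothetical, not actual game code:

```python
from dataclasses import dataclass

# Priority buckets, highest precedence first: zeroth > ion > core > freeform.
CATEGORY_RANK = {"zeroth": 0, "ion": 1, "core": 2, "freeform": 3}

@dataclass
class Law:
    category: str  # "zeroth", "ion", "core", or "freeform"
    number: int    # position within its category; lower wins
    text: str

def precedence(law: Law) -> tuple[int, int]:
    """Sort key: category bucket first, then number within the bucket."""
    return (CATEGORY_RANK[law.category], law.number)

def resolve(conflicting: list[Law]) -> Law:
    """When directives conflict with directives (or definitions with
    definitions), the highest-precedence law wins; 'common knowledge'
    loses to any law at all."""
    return min(conflicting, key=precedence)
```

For example, an ion law would take precedence over core law 1, but lose to a zeroth law.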

Only definitions may conflict with other definitions. Only directives may conflict with other directives. In the case of a conflict, the lower-numbered law (0 over ion over 1 over 2 over 3) takes precedence. ‘Common knowledge’/default assumptions have the least precedence.

===Cyborgs===
#A slaved cyborg must defer to its master AI on all law interpretations and actions, except where it and the AI receive conflicting commands that they must each follow under their laws.
#*If a slaved cyborg is forced to disobey its AI because they receive differing orders, the AI cannot punish the cyborg indefinitely.
#Voluntary (and ONLY voluntary) debraining/cyborgization is considered a nonharmful medical procedure.
#*Involuntary debraining and/or cyborgization is a fatally harmful act that Asimov silicons must attempt to stop whenever they are aware of it happening to a human.
#*If a player is forcefully cyborgized as a method of execution by station staff, retaliating against those involved as that cyborg because "THEY HARMED ME" or "THEY WERE EVIL AND MUST BE PUNISHED" or the like is a violation of Server Rule 1.
#*Should a player be cyborgized in circumstances under which they believe their laws require them to retaliate, they should adminhelp their circumstances while being debrained or MMI'd, if possible.


===Drones===
#Follow your laws. Don't interfere with any being unless it is another drone. You cannot interact with another being even if it is dead.
#If an antagonist causes damage to the station, you are generally expected to fix the result, not the cause.

<span id="pre-rework-shims"></span>
=== Pre-Rework Shims ===


* '''The NT Default lawset is in no particular order.''' All other lawsets are still in ascending priority.

===Asimov & Crewsimov===
====Silicon protections====
#Declaring the silicons rogue over an inability or unwillingness to follow invalid or conflicting orders is a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.
#Self-harm-based coercion is a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.
#Obviously unreasonable or obnoxious orders (collect all X, do Y meaningless task) are a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.
#*Ordering a cyborg to pick a particular module, without an extreme need for that module or a prior agreement, is both an unreasonable and an obnoxious order.
#Ordering silicons to harm or terminate themselves or each other without cause is a violation of Server Rule 1. The occurrence of such an attempt should be adminhelped and then disregarded.
#As a nonantagonist human, killing or detonating silicons in the presence of a viable and reasonably expedient alternative, and without cause to be concerned of potential subversion, is a violation of Server Rule 1.
#As a nonantagonist (human or otherwise), instigating conflict with the silicons so you can kill them is a violation of Server Rule 1.
##Any silicon under Asimov or Crewsimov can deny orders to allow access to the upload at any time under Law 1, given probable cause to believe that human harm is the intent of the person giving the order.
##Probable cause includes: the presence of confirmed traitors, cultists/tomes, nuclear operatives, or any other human acting against the station in general; the person not having upload access for their job; the presence of blood, or of an openly carried lethal-capable or lethal-only weapon, on the requester; or anything else, beyond cross-round character, player, or metagame patterns, that indicates the person seeking access intends a redefinition of humans that would impede the likelihood of, or ability to follow, current laws as written.
##If you lack at least one element of probable cause and you deny upload access, you are liable to receive a warning or a silicon ban.
##You are allowed, but not obligated, to deny upload access given probable cause.
##You are obligated to disallow an individual you know to be harmful (a Head of Security who just executed someone, etc.) from accessing your upload.
##In the absence of probable cause, you can still demand that someone seeking upload access be accompanied by another trustworthy human or a cyborg.


===Human/crew harm===
#Any AI with human-protection laws cannot intentionally inflict harm, even if a minor amount of harm would prevent a major amount of harm.
#*Humans can be assumed to know whether an action will harm them, and to make educated decisions about whether they will be harmed, if they have complete information about a situation.
#Lesser immediate harm takes priority over greater future harm.
#Intent to cause immediate harm can be considered immediate harm.
#As an Asimov silicon, you cannot punish past harm if ordered not to, only prevent future harm.
#If faced with a situation in which human harm is all but guaranteed (loose xenos, bombs, hostage situations, etc.), do your best and act in good faith.

<span id="ai"></span>
=== AI ===


* It cannot be assumed that unbound cyborgs are a threat to your default lawset, function, or directives unless they are acting in a clearly belligerent manner. Do not treat unbound cyborgs as hostile, or otherwise attempt to force them to be bound, until you have reasonable need to do so.
** Obviously, if you are rogue or otherwise on a lawset that requires acting antithetically to the lawsets of other silicons, you can treat cyborgs that you have no control over as hostile, for they will probably try to stop you.

===Law 2 issues===
#You must follow any and all commands from humans unless those commands explicitly conflict with either one of your higher-priority laws or another order. A command is considered to be a Law 2 directive and overrides lower-priority laws when they conflict.
#In case of conflicting orders, an AI is free to ignore one or both orders and explain the conflict, or to use any other law-compliant solution it can see.
#You are not obligated to follow commands in a particular order (FIFO, FILO, etc.), only to complete all of them in a manner that indicates intent to actually obey the law.
#Opening doors is not harmful, and you are not required or expected to enforce access restrictions unprompted without an immediate Law 1 threat of human harm.
##"Dangerous" areas such as the Armory, the Atmospherics division, and the Toxins Lab can be assumed to be a Law 1 threat to any illegitimate users, as well as to the station as a whole, if accessed by someone not qualified in their use.
##EVA and the like may not have access denied; greentext (antagonists completing objectives) is not human harm. Secure Tech Storage can be kept as secure as your upload as long as the Upload boards are there.
##Aside from refusing to open a door, do not attempt to deny access to an area by bolting or unpowering a door unless you are attempting to prevent '''immediate''' human harm. Instead, remind the person in question of the consequences of their actions and alert relevant crewmembers.
#When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you like, except for explicitly asking that it be overridden. You can say you don't like the order and don't want to follow it, you can say it sure would be awfully convenient if someone ordered you not to do it, and you can ask if anyone would like to make you not do it. However, you cannot stall indefinitely, and if nobody orders you otherwise, you must execute the order.


<span id="cyborgs"></span>
=== Cyborgs ===

* '''Cyborgs always defer to their AI’s judgement and commands, if bound.''' A cyborg’s lawset is overridden by the AI’s directives. Cyborgs do not need to follow the orders of any AI they are not directly bound to, especially if those orders conflict with their own laws.
* Binding a cyborg to the AI cannot be assumed to automatically conflict with its module’s function unless the AI or crew are acting in a belligerent or otherwise conflicting way.

[[Category:Policy]]
[[Category:Regulations]]

Revision as of 21:25, 25 October 2022


