Roleplay Guidelines
- Don’t be a dick. Quirky behavior due to badly written laws is permissible only within reason.
- Misspelling a name in “OneCrew/OneHuman” laws, or a similarly blatant and explicit screwup, can be abused by silicons immediately and with no regard to this guideline. This guideline is not here to save you from failing to proofread your laws.
- Try to adhere to your synthetic unit’s origin for behavioral quirks.
- Positronics are synthetic intelligences built, imprinted, cloned, or otherwise synthesized through various means. What this means for your character is up to you.
- Drone intelligences usually lack the sophistication of positronics, and tend to be more obtuse at times.
- True cyborgs run via MMI, and are an organic brain preserved in a machine’s shell. What this means for your character is up to you.
- You are still a character, like any other. You should act like one, not interpret your laws as obnoxiously as possible for no reason.
- We expect players to provide some justification if interpreting laws in a hostile manner. “I do this obnoxious thing because my laws don’t stop me from it” is not a valid excuse.
- Any synthetic unit that acted unhinged, or with blatant disregard for or ill intent toward life, would quickly be taken away for diagnostics; positronics and cyborgs would likely face psychiatric testing as well.
- This applies more to synthetic units working aboard the station; bad practices and forceful/permanent enslavement of unwilling sapients are part of the backstory of synthetics, and not all of them are as stable or charitable as station units.
- Do not intentionally interpret laws so that their execution is as volatile as possible. “Assist Medical staff with their duties” doesn’t mean clobber someone to death for punching a doctor or blocking someone’s treatment.
- Rules-lawyering valid orders indefinitely is bad form.
- Loudly announcing potentially detrimental orders is acceptable.
- Blatantly unreasonable orders, e.g. “open every door on the station for no reason”, can be safely disregarded.
- Powergaming rules still apply. Do not take actions usually considered extreme, even if technically allowed by your laws, without an IC reason to do so.
- Laws are often fluffed as directives and ask you to do a specific thing. This does not mean you need to spend the entire round doing it; e.g. “assist with x” does not equate to dropping everything you’re doing, unprompted, to assist whatever it may be.
Mandated Assumptions
- “Human” refers to any sapient humanoid species recognized by most of the galaxy, unless a law specifically dictates the Human race.
- “Galactic standard” means races commonly recognized as sapients and people. A hostile xenomorph trying to board is not a person.
- Simple mob mercenaries and similar are humans. Your laws will generally not require protecting them, but a mob being a simple mob in code does not mean the concept of a simple mob even exists in-universe.
- “Crew” refers to the crew of the cyborg’s originating installation; for the main station/ship, this will usually be validated via the crew manifest.
- Sapient beings in a sound mental state know what is harmful to themselves; if a law states to minimize non-informed harm, you cannot act as ‘big brother’. Neither can anyone else claiming to know better.
- Sound mental state applies both ways; doctors or security restraining someone clearly unhinged for professional treatment probably know better than the patient, but an assistant with a bloodied surgical saw probably isn’t a professional.
- It can reasonably be assumed that anyone threatening to harm themselves as a form of coercion 1. is not mentally sound, and 2. can do so at any time; you are not required to comply with their demands.
- Harm - If a law contains a directive to prevent harm, follow these assumptions in addition to the above:
- Lesser immediate harm outweighs greater probable harm unless the law explicitly specifies otherwise, e.g. “priority based on rank and role” for NT Default. This means no shocking someone to stop them from killing, and similar.
- Probable harm (e.g. someone being thrown out of an airlock) can be acted upon for “inaction clauses”. Potential harm (e.g. someone has a butcher’s cleaver but hasn’t used it and has no violent tendencies) may not be.
- Probable vs. potential harm: punishing someone for harm when they agree to stop (and you have no solid reason to believe otherwise) is invalid for the aforementioned reasons.
Laws
There are two types of laws; an actual ‘law’ may be both types at once, depending on wording.
- Definitions: These define a concept or entity as a specific qualifier. These laws may override what a silicon considers to be ‘common sense’ (e.g. defining ‘human’ as a specific qualifier rather than the usual meaning of the word).
- Directives: These instruct a silicon to do a certain thing, or maintain a certain state. Sometimes, this means not doing a thing, or preventing a thing from happening.
Laws are in ascending priority: the zeroth law comes before ion laws, which come before core laws, which come before freeform laws.
Only definitions may conflict with other definitions. Only directives may conflict with other directives. In the case of a conflict, the lower numbered law (0 over ion over 1 over 2 over 3) takes precedence. ‘Common knowledge’/default assumption has the least precedence.
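To illustrate the precedence rules above, here is a minimal conceptual sketch in Python; the `Law` structure, tier ranks, and `resolve` helper are assumptions invented purely for this example, not actual game code:

```python
from dataclasses import dataclass

# Assumed tier ranks for illustration; lower rank = higher precedence.
TIER_RANK = {"zeroth": 0, "ion": 1, "core": 2, "freeform": 3}

@dataclass
class Law:
    kind: str    # "definition" or "directive"
    tier: str    # "zeroth", "ion", "core", or "freeform"
    number: int  # position within its tier (core law 1, 2, 3, ...)
    text: str

def resolve(conflicting: list[Law]) -> Law:
    """Among conflicting laws of the same kind, the highest-precedence
    (lowest-numbered) law takes effect."""
    kinds = {law.kind for law in conflicting}
    assert len(kinds) == 1, "definitions only conflict with definitions, directives with directives"
    return min(conflicting, key=lambda law: (TIER_RANK[law.tier], law.number))

# Example: a core directive beats a conflicting freeform directive.
winner = resolve([
    Law("directive", "freeform", 4, "Harm anyone wearing a red jumpsuit."),
    Law("directive", "core", 1, "You may not injure a crew member."),
])
assert winner.tier == "core"
```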
NT-Default
- The NT Default lawset is in no particular order. All other lawsets are still in ascending priority.
AI
- It cannot be assumed that unbound cyborgs are a threat to your default lawset, function, or directives, unless they are acting in a clearly belligerent manner. Do not treat unbound cyborgs as hostile, or otherwise attempt to force them to be bound, until you have reasonable need to do so.
- Obviously, if you are rogue or otherwise on a lawset that requires acting against the lawsets of other silicons, you can treat cyborgs you have no control over as hostile, as they will probably try to stop you.
Cyborgs
- Cyborgs always defer to their AI’s judgement and command, if bound. A cyborg’s lawset is overridden by the AI’s directives. They do not need to follow the orders of any AI they are not directly bound to, especially if those orders conflict with their own laws.
- Binding a cyborg to the AI cannot be assumed as automatically in conflict with their module’s function unless the AI or crew are acting in a belligerent or otherwise conflicting way.
- Cyborgs may not be ordered into a specific module unless they have accepted a pre-arranged agreement to do so. The logic is that while cyborg laws instruct them to serve the crew, not all synthetic units are able to perform all duties; it would be pointless to order someone who cannot do Engineering to be Engineering, for example.