yuzu/src/core/arm/arm_interface.h

// SPDX-FileCopyrightText: 2014 Citra Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
#pragma once
#include <array>
#include <span>
#include <vector>
#include <dynarmic/interface/halt_reason.h>
#include "common/common_funcs.h"
#include "common/common_types.h"
#include "core/hardware_properties.h"
namespace Common {
struct PageTable;
}
namespace Kernel {
enum class VMAPermission : u8;
enum class DebugWatchpointType : u8;
struct DebugWatchpoint;
} // namespace Kernel
namespace Core {
class System;
class CPUInterruptHandler;
using WatchpointArray = std::array<Kernel::DebugWatchpoint, Core::Hardware::NUM_WATCHPOINTS>;
/// Generic ARMv8 CPU interface
class ARM_Interface {
public:
YUZU_NON_COPYABLE(ARM_Interface);
YUZU_NON_MOVEABLE(ARM_Interface);
explicit ARM_Interface(System& system_, bool uses_wall_clock_)
: system{system_}, uses_wall_clock{uses_wall_clock_} {}
virtual ~ARM_Interface() = default;
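/// Saved register state for an AArch32 (32-bit) guest thread.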
struct ThreadContext32 {
std::array<u32, 16> cpu_registers{};
std::array<u32, 64> extension_registers{};
u32 cpsr{};
u32 fpscr{};
u32 fpexc{};
u32 tpidr{};
};
// Internally within the kernel, it expects the AArch32 version of the
// thread context to be 0x150 (336) bytes in size.
static_assert(sizeof(ThreadContext32) == 0x150);
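/// Saved register state for an AArch64 (64-bit) guest thread.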
struct ThreadContext64 {
std::array<u64, 31> cpu_registers{};
u64 sp{};
u64 pc{};
u32 pstate{};
std::array<u8, 4> padding{};
std::array<u128, 32> vector_registers{};
u32 fpcr{};
u32 fpsr{};
u64 tpidr{};
};
// Internally within the kernel, it expects the AArch64 version of the
// thread context to be 800 bytes in size.
static_assert(sizeof(ThreadContext64) == 0x320);
/// Runs the CPU until an event happens
void Run();
/// Clears the entire instruction cache
virtual void ClearInstructionCache() = 0;
/**
* Invalidates the instruction cache over the given range
* @param addr Start address of the cache range to invalidate
* @param size Size of the cache range to invalidate, starting at addr
*/
virtual void InvalidateCacheRange(VAddr addr, std::size_t size) = 0;
/**
* Notifies CPU emulation that the current page table has changed.
* @param new_page_table The new page table.
* @param new_address_space_size_in_bits The new usable size of the address space in bits.
* This can be either 32, 36, or 39 on official software.
*/
virtual void PageTableChanged(Common::PageTable& new_page_table,
std::size_t new_address_space_size_in_bits) = 0;
/**
* Set the Program Counter to an address
* @param addr Address to set PC to
*/
virtual void SetPC(u64 addr) = 0;
/**
* Get the current Program Counter
* @return Returns current PC
*/
virtual u64 GetPC() const = 0;
/**
* Get the current Stack Pointer
* @return Returns current SP
*/
virtual u64 GetSP() const = 0;
/**
* Get an ARM register
* @param index Register index
* @return Returns the value in the register
*/
virtual u64 GetReg(int index) const = 0;
/**
* Set an ARM register
* @param index Register index
* @param value Value to set register to
*/
virtual void SetReg(int index, u64 value) = 0;
/**
* Gets the value of a specified vector register.
*
* @param index The index of the vector register.
* @return the value within the vector register.
*/
virtual u128 GetVectorReg(int index) const = 0;
/**
* Sets a given value into a vector register.
*
* @param index The index of the vector register.
* @param value The new value to place in the register.
*/
virtual void SetVectorReg(int index, u128 value) = 0;
/**
* Get the current PSTATE register
* @return Returns the value of the PSTATE register
*/
virtual u32 GetPSTATE() const = 0;
/**
* Set the current PSTATE register
* @param pstate Value to set PSTATE to
*/
virtual void SetPSTATE(u32 pstate) = 0;
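/// Gets the address of the emulated thread's TLS region (read-only thread pointer).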
virtual VAddr GetTlsAddress() const = 0;
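/// Sets the address of the emulated thread's TLS region.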
virtual void SetTlsAddress(VAddr address) = 0;
/**
* Gets the value within the TPIDR_EL0 (read/write software thread ID) register.
*
* @return the value within the register.
*/
virtual u64 GetTPIDR_EL0() const = 0;
/**
* Sets a new value within the TPIDR_EL0 (read/write software thread ID) register.
*
* @param value The new value to place in the register.
*/
virtual void SetTPIDR_EL0(u64 value) = 0;
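/// Saves the current register state into the given thread context.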
virtual void SaveContext(ThreadContext32& ctx) = 0;
virtual void SaveContext(ThreadContext64& ctx) = 0;
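/// Restores register state from the given thread context.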
virtual void LoadContext(const ThreadContext32& ctx) = 0;
virtual void LoadContext(const ThreadContext64& ctx) = 0;
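/// Sets the kernel debug watchpoints that memory accesses on this core are checked against.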
void LoadWatchpointArray(const WatchpointArray& wp);
/// Clears the exclusive monitor's state.
virtual void ClearExclusiveState() = 0;
/// Signal an interrupt and ask the core to halt as soon as possible.
virtual void SignalInterrupt() = 0;
/// Clear a previous interrupt.
virtual void ClearInterrupt() = 0;
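/// A single frame of a guest stack backtrace.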
struct BacktraceEntry {
std::string module;
u64 address;
u64 original_address;
u64 offset;
std::string name;
};
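/// Reconstructs a backtrace from a previously saved thread context.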
static std::vector<BacktraceEntry> GetBacktraceFromContext(System& system,
const ThreadContext32& ctx);
static std::vector<BacktraceEntry> GetBacktraceFromContext(System& system,
const ThreadContext64& ctx);
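/// Returns a backtrace for the code currently executing on this core.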
virtual std::vector<BacktraceEntry> GetBacktrace() const = 0;
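/// Writes the current backtrace to the log.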
void LogBacktrace() const;
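/// Halt reasons reported by the JIT, expressed as Dynarmic::HaltReason values.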
static constexpr Dynarmic::HaltReason step_thread = Dynarmic::HaltReason::Step;
static constexpr Dynarmic::HaltReason break_loop = Dynarmic::HaltReason::UserDefined2;
static constexpr Dynarmic::HaltReason svc_call = Dynarmic::HaltReason::UserDefined3;
static constexpr Dynarmic::HaltReason breakpoint = Dynarmic::HaltReason::UserDefined4;
static constexpr Dynarmic::HaltReason watchpoint = Dynarmic::HaltReason::MemoryAbort;
static constexpr Dynarmic::HaltReason no_execute = Dynarmic::HaltReason::UserDefined6;
protected:
/// System context that this ARM interface is running under.
System& system;
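/// Debug watchpoints loaded via LoadWatchpointArray, if any.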
const WatchpointArray* watchpoints;
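/// Whether guest timing is derived from the host's wall clock.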
bool uses_wall_clock;
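/// Resolves module names and symbol names for the given backtrace entries.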
static void SymbolicateBacktrace(Core::System& system, std::vector<BacktraceEntry>& out);
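/// Returns the loaded watchpoint matching the given access, or nullptr if none matches.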
const Kernel::DebugWatchpoint* MatchingWatchpoint(
VAddr addr, u64 size, Kernel::DebugWatchpointType access_type) const;
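/// Runs the JIT until it halts; returns the halt reason.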
virtual Dynarmic::HaltReason RunJit() = 0;
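/// Steps the JIT by a single instruction; returns the halt reason.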
virtual Dynarmic::HaltReason StepJit() = 0;
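/// Returns the number of the most recently trapped supervisor call (SVC).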
virtual u32 GetSvcNumber() const = 0;
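/// Returns the watchpoint that caused the most recent halt, if any.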
virtual const Kernel::DebugWatchpoint* HaltedWatchpoint() const = 0;
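/// Rewinds execution state after a breakpoint instruction halts the core.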
virtual void RewindBreakpointInstruction() = 0;
};
} // namespace Core