suyu/src/core/hle/service/nvdrv/devices/nvhost_as_gpu.cpp

// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstring>
#include <utility>

#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/service/nvdrv/devices/nvhost_as_gpu.h"
#include "core/hle/service/nvdrv/devices/nvmap.h"
#include "video_core/memory_manager.h"
#include "video_core/rasterizer_interface.h"
#include "video_core/renderer_base.h"

namespace Service::Nvidia::Devices {

namespace NvErrCodes {
enum {
    InvalidNmapHandle = -22,
};
}

nvhost_as_gpu::nvhost_as_gpu(std::shared_ptr<nvmap> nvmap_dev) : nvmap_dev(std::move(nvmap_dev)) {}
nvhost_as_gpu::~nvhost_as_gpu() = default;
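
// Dispatches an incoming ioctl on the nvhost-as-gpu device to the matching handler.
// Note: the remap command is matched on the command number alone (command.cmd) rather than on
// the full raw word, since its payload is a variable-length array of entries and the size bits
// encoded in the raw ioctl word therefore differ from call to call.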
u32 nvhost_as_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
    LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}, input_size=0x{:X}, output_size=0x{:X}",
              command.raw, input.size(), output.size());

    switch (static_cast<IoctlCommand>(command.raw)) {
    case IoctlCommand::IocInitalizeExCommand:
        return InitalizeEx(input, output);
    case IoctlCommand::IocAllocateSpaceCommand:
        return AllocateSpace(input, output);
    case IoctlCommand::IocMapBufferExCommand:
        return MapBufferEx(input, output);
    case IoctlCommand::IocBindChannelCommand:
        return BindChannel(input, output);
    case IoctlCommand::IocGetVaRegionsCommand:
        return GetVARegions(input, output);
    case IoctlCommand::IocUnmapBufferCommand:
        return UnmapBuffer(input, output);
    }

    if (static_cast<IoctlCommand>(command.cmd.Value()) == IoctlCommand::IocRemapCommand)
        return Remap(input, output);

    UNIMPLEMENTED_MSG("Unimplemented ioctl command");
    return 0;
}
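
// Handles the IocInitalizeExCommand ioctl, which configures the address space (notably the big
// page size). Currently a stub: the parameters are deserialized and logged, but no state is kept.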
u32 nvhost_as_gpu::InitalizeEx(const std::vector<u8>& input, std::vector<u8>& output) {
    IoctlInitalizeEx params{};
    std::memcpy(&params, input.data(), input.size());

    LOG_WARNING(Service_NVDRV, "(STUBBED) called, big_page_size=0x{:X}", params.big_page_size);

    return 0;
}
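
// Reserves a range of GPU virtual address space. When bit 0 of flags is set, the guest-supplied
// offset appears to request a fixed allocation address; otherwise the memory manager chooses an
// address using the requested alignment.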
u32 nvhost_as_gpu::AllocateSpace(const std::vector<u8>& input, std::vector<u8>& output) {
    IoctlAllocSpace params{};
    std::memcpy(&params, input.data(), input.size());

    LOG_DEBUG(Service_NVDRV, "called, pages={:X}, page_size={:X}, flags={:X}", params.pages,
              params.page_size, params.flags);

    auto& gpu = Core::System::GetInstance().GPU();
    const u64 size{static_cast<u64>(params.pages) * static_cast<u64>(params.page_size)};
    if (params.flags & 1) {
        params.offset = gpu.MemoryManager().AllocateSpace(params.offset, size, 1);
    } else {
        params.offset = gpu.MemoryManager().AllocateSpace(size, params.align);
    }

    std::memcpy(output.data(), &params, output.size());
    return 0;
}
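
// Remaps a list of nvmap handles into the GPU address space. Both the target offset and the page
// count of each entry are expressed in units of 64 KiB big pages, hence the << 0x10 shifts below.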
u32 nvhost_as_gpu::Remap(const std::vector<u8>& input, std::vector<u8>& output) {
    std::size_t num_entries = input.size() / sizeof(IoctlRemapEntry);

    LOG_WARNING(Service_NVDRV, "(STUBBED) called, num_entries=0x{:X}", num_entries);

    std::vector<IoctlRemapEntry> entries(num_entries);
    std::memcpy(entries.data(), input.data(), input.size());

    auto& gpu = Core::System::GetInstance().GPU();

    for (const auto& entry : entries) {
        LOG_WARNING(Service_NVDRV, "remap entry, offset=0x{:X} handle=0x{:X} pages=0x{:X}",
                    entry.offset, entry.nvmap_handle, entry.pages);

        Tegra::GPUVAddr offset = static_cast<Tegra::GPUVAddr>(entry.offset) << 0x10;

        auto object = nvmap_dev->GetObject(entry.nvmap_handle);
        if (!object) {
            LOG_CRITICAL(Service_NVDRV, "nvmap {} is an invalid handle!", entry.nvmap_handle);
            std::memcpy(output.data(), entries.data(), output.size());
            return static_cast<u32>(NvErrCodes::InvalidNmapHandle);
        }

        ASSERT(object->status == nvmap::Object::Status::Allocated);

        u64 size = static_cast<u64>(entry.pages) << 0x10;
        ASSERT(size <= object->size);

        Tegra::GPUVAddr returned = gpu.MemoryManager().MapBufferEx(object->addr, offset, size);
        ASSERT(returned == offset);
    }

    std::memcpy(output.data(), entries.data(), output.size());
    return 0;
}
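
// Maps an nvmap object into the GPU address space and records the mapping so it can later be
// undone by UnmapBuffer. With bit 0 of flags set, the caller supplies the target GPU virtual
// address in params.offset; otherwise the memory manager picks one and returns it.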
u32 nvhost_as_gpu::MapBufferEx(const std::vector<u8>& input, std::vector<u8>& output) {
    IoctlMapBufferEx params{};
    std::memcpy(&params, input.data(), input.size());

    LOG_DEBUG(Service_NVDRV,
              "called, flags={:X}, nvmap_handle={:X}, buffer_offset={}, mapping_size={}"
              ", offset={}",
              params.flags, params.nvmap_handle, params.buffer_offset, params.mapping_size,
              params.offset);

    if (!params.nvmap_handle) {
        return 0;
    }

    auto object = nvmap_dev->GetObject(params.nvmap_handle);
    ASSERT(object);

    // We can only map objects that have already been assigned a CPU address.
    ASSERT(object->status == nvmap::Object::Status::Allocated);

    ASSERT(params.buffer_offset == 0);

    // The real nvservices doesn't make a distinction between handles and ids: an object can only
    // have one handle, and it will be the same as its id. Assert that this is the case to prevent
    // unexpected behavior.
    ASSERT(object->id == params.nvmap_handle);

    auto& gpu = Core::System::GetInstance().GPU();

    if (params.flags & 1) {
        params.offset = gpu.MemoryManager().MapBufferEx(object->addr, params.offset, object->size);
    } else {
        params.offset = gpu.MemoryManager().MapBufferEx(object->addr, object->size);
    }

    // Create a new mapping entry for this operation.
    ASSERT_MSG(buffer_mappings.find(params.offset) == buffer_mappings.end(),
               "Offset is already mapped");

    BufferMapping mapping{};
    mapping.nvmap_handle = params.nvmap_handle;
    mapping.offset = params.offset;
    mapping.size = object->size;

    buffer_mappings[params.offset] = mapping;

    std::memcpy(output.data(), &params, output.size());
    return 0;
}
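
// Unmaps a region previously created by MapBufferEx, flushing and invalidating the backing CPU
// range from the rasterizer cache before releasing the GPU virtual address range.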
u32 nvhost_as_gpu::UnmapBuffer(const std::vector<u8>& input, std::vector<u8>& output) {
    IoctlUnmapBuffer params{};
    std::memcpy(&params, input.data(), input.size());

    LOG_DEBUG(Service_NVDRV, "called, offset=0x{:X}", params.offset);

    const auto itr = buffer_mappings.find(params.offset);
    if (itr == buffer_mappings.end()) {
        LOG_WARNING(Service_NVDRV, "Tried to unmap an invalid offset 0x{:X}", params.offset);
        // Hardware tests show that unmapping an already unmapped buffer always succeeds and
        // never returns an error.
        return 0;
    }

    auto& system_instance = Core::System::GetInstance();

    // Remove this memory region from the rasterizer cache.
    auto& gpu = system_instance.GPU();
    auto cpu_addr = gpu.MemoryManager().GpuToCpuAddress(params.offset);
    ASSERT(cpu_addr);
    system_instance.Renderer().Rasterizer().FlushAndInvalidateRegion(*cpu_addr, itr->second.size);

    params.offset = gpu.MemoryManager().UnmapBuffer(params.offset, itr->second.size);
    buffer_mappings.erase(itr->second.offset);

    std::memcpy(output.data(), &params, output.size());
    return 0;
}
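
// Binds this address space to an nvhost channel. Only the channel file descriptor is recorded
// here; nothing is submitted to the GPU.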
u32 nvhost_as_gpu::BindChannel(const std::vector<u8>& input, std::vector<u8>& output) {
    IoctlBindChannel params{};
    std::memcpy(&params, input.data(), input.size());

    LOG_DEBUG(Service_NVDRV, "called, fd={:X}", params.fd);

    channel = params.fd;
    return 0;
}
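
// Reports the virtual address regions of the address space. The two regions returned here (one
// with 4 KiB pages, one with 64 KiB pages) are hardcoded, presumably to mirror the values the
// real service reports, rather than being derived from the emulated address space.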
u32 nvhost_as_gpu::GetVARegions(const std::vector<u8>& input, std::vector<u8>& output) {
    IoctlGetVaRegions params{};
    std::memcpy(&params, input.data(), input.size());

    LOG_WARNING(Service_NVDRV, "(STUBBED) called, buf_addr={:X}, buf_size={:X}", params.buf_addr,
                params.buf_size);

    params.buf_size = 0x30;
    params.regions[0].offset = 0x04000000;
    params.regions[0].page_size = 0x1000;
    params.regions[0].pages = 0x3fbfff;

    params.regions[1].offset = 0x04000000;
    params.regions[1].page_size = 0x10000;
    params.regions[1].pages = 0x1bffff;

    // TODO(ogniK): This probably can stay stubbed but should add support way way later
    std::memcpy(output.data(), &params, output.size());
    return 0;
}

} // namespace Service::Nvidia::Devices