【Android Binder】Dissecting the Binder Mechanism's Implementation Flow, Starting from the Source Code
A deep understanding of Android's Binder architecture is hard to reach just by reading other people's summaries.
In this article, we follow the Binder chapter of 《深入理解Android:卷1》 as our thread and dissect how Binder works during mediaserver's registration in the Android 12 source code.
The main function
main_mediaserver.cpp
int main(int argc __unused, char **argv __unused)
{
signal(SIGPIPE, SIG_IGN);
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm(defaultServiceManager());
ALOGI("ServiceManager: %p", sm.get());
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
registerExtensions();
::android::hardware::configureRpcThreadpool(16, false);
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
::android::hardware::joinRpcThreadpool();
}
First, the ProcessState object is obtained. sm is the client side of ServiceManager; services are registered with ServiceManager through it.
Then the MediaPlayerService and ResourceManagerService services are instantiated and registered.
ProcessState::self()
ProcessState.cpp
sp<ProcessState> proc(ProcessState::self());
#ifdef __ANDROID_VNDK__
const char* kDefaultDriver = "/dev/vndbinder";
#else
const char* kDefaultDriver = "/dev/binder";
#endif
sp<ProcessState> ProcessState::self()
{
return init(kDefaultDriver, false /*requireDefault*/);
}
Here kDefaultDriver is not null.
sp<ProcessState> ProcessState::init(const char* driver, bool requireDefault) {
if (driver == nullptr) {
std::lock_guard<std::mutex> l(gProcessMutex);
if (gProcess) {
verifyNotForked(gProcess->mForked);
}
return gProcess;
}
[[clang::no_destroy]] static std::once_flag gProcessOnce;
std::call_once(gProcessOnce, [&](){
if (access(driver, R_OK) == -1) {
ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
driver = "/dev/binder";
}
if (0 == strcmp(driver, "/dev/vndbinder") && !isVndservicemanagerEnabled()) {
ALOGE("vndservicemanager is not started on this device, you can save resources/threads "
"by not initializing ProcessState with /dev/vndbinder.");
}
// we must install these before instantiating the gProcess object,
// otherwise this would race with creating it, and there could be the
// possibility of an invalid gProcess object forked by another thread
// before these are installed
int ret = pthread_atfork(ProcessState::onFork, ProcessState::parentPostFork,
ProcessState::childPostFork);
LOG_ALWAYS_FATAL_IF(ret != 0, "pthread_atfork error %s", strerror(ret));
std::lock_guard<std::mutex> l(gProcessMutex);
gProcess = sp<ProcessState>::make(driver);
});
if (requireDefault) {
// Detect if we are trying to initialize with a different driver, and
// consider that an error. ProcessState will only be initialized once above.
LOG_ALWAYS_FATAL_IF(gProcess->getDriverName() != driver,
"ProcessState was already initialized with %s,"
" can't initialize with %s.",
gProcess->getDriverName().c_str(), driver);
}
verifyNotForked(gProcess->mForked);
return gProcess;
}
When gProcess = sp<ProcessState>::make(driver); runs, a new ProcessState is created, and std::call_once guarantees this happens only once even in a multithreaded environment.
Its constructor:
#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)
#define DEFAULT_MAX_BINDER_THREADS 15
#define DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION 1
ProcessState::ProcessState(const char* driver)
: mDriverName(String8(driver)),
mDriverFD(-1),
mVMStart(MAP_FAILED),
mThreadCountLock(PTHREAD_MUTEX_INITIALIZER),
mThreadCountDecrement(PTHREAD_COND_INITIALIZER),
mExecutingThreadsCount(0),
mWaitingForThreads(0),
mMaxThreads(DEFAULT_MAX_BINDER_THREADS),
mCurrentThreads(0),
mKernelStartedThreads(0),
mStarvationStartTimeMs(0),
mForked(false),
mThreadPoolStarted(false),
mThreadPoolSeq(1),
mCallRestriction(CallRestriction::NONE) {
unique_fd opened = open_driver(driver);
if (opened.ok()) {
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE,
opened.get(), 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
ALOGE("Using %s failed: unable to mmap transaction memory.", driver);
opened.reset();
mDriverName.clear();
}
}
// ...(rest of the constructor omitted)
}
The constructor mmaps the binder fd, providing a chunk of virtual address space to receive transactions.
BINDER_VM_SIZE is defined as 1 MB minus two pages (with 4 KB pages, 1048576 − 8192 = 1040384 bytes).
Opening the binder device
static unique_fd open_driver(const char* driver) {
auto fd = unique_fd(open(driver, O_RDWR | O_CLOEXEC));
if (!fd.ok()) {
PLOGE("Opening '%s' failed", driver);
return {};
}
int vers = 0;
int result = ioctl(fd.get(), BINDER_VERSION, &vers);
if (result == -1) {
PLOGE("Binder ioctl to obtain version failed");
return {};
}
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! "
"ioctl() return value: %d",
vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
return {};
}
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
result = ioctl(fd.get(), BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
uint32_t enable = DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION;
result = ioctl(fd.get(), BINDER_ENABLE_ONEWAY_SPAM_DETECTION, &enable);
if (result == -1) {
ALOGE_IF(ProcessState::isDriverFeatureEnabled(
ProcessState::DriverFeature::ONEWAY_SPAM_DETECTION),
"Binder ioctl to enable oneway spam detection failed: %s", strerror(errno));
}
return fd;
}
This opens the /dev/binder device, sets maxThreads to 15, and tells the driver via ioctl that this fd supports at most 15 threads.
ioctl
ioctl is a system call for device control, typically used to communicate with device drivers. It lets a user-space program send commands, with arguments, to a driver to control the device's behavior. The name comes from "input/output control".
/dev/binder is the device interface exposed by the Binder driver. By opening /dev/binder, a program can talk to the Binder driver and perform IPC operations.
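As a concrete illustration, here is a minimal standalone sketch (not AOSP code, and assuming a device where /dev/binder is accessible) that does what open_driver() does: open the device and query the protocol version via ioctl.
// Minimal sketch: query the binder driver's protocol version (not AOSP code).
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h> // binder_version, BINDER_VERSION
int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }
    binder_version vers{};
    if (ioctl(fd, BINDER_VERSION, &vers) < 0) {
        perror("ioctl BINDER_VERSION");
        close(fd);
        return 1;
    }
    printf("binder protocol version: %d\n", vers.protocol_version);
    close(fd);
    return 0;
}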
At this point we can summarize what ProcessState::self() does:
- It opens the /dev/binder device, establishing a channel to the kernel's binder driver.
- It mmaps the returned fd, so the binder driver allocates a block of memory to receive data.
- Since ProcessState is a singleton, the device is opened only once per process.
Next we analyze the second key function: defaultServiceManager().
defaultServiceManager()
sp<IServiceManager> defaultServiceManager()
{
std::call_once(gSmOnce, []() {
#if defined(__BIONIC__) && !defined(__ANDROID_VNDK__)
/* wait for service manager */ {
using std::literals::chrono_literals::operator""s;
using android::base::WaitForProperty;
while (!WaitForProperty("servicemanager.ready", "true", 1s)) {
ALOGE("Waited for servicemanager.ready for a second, waiting another...");
}
}
#endif
sp<AidlServiceManager> sm = nullptr;
while (sm == nullptr) {
//important 1
sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));
if (sm == nullptr) {
ALOGE("Waiting 1s on context object on %s.", ProcessState::self()->getDriverName().c_str());
sleep(1);
}
}
//important 2
gDefaultServiceManager = sp<ServiceManagerShim>::make(sm);
});
return gDefaultServiceManager;
}
This calls ProcessState's getContextObject method; let's look at it:
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
//This returns an IBinder
sp<IBinder> context = getStrongProxyForHandle(0);
if (context) {
// The root object is special since we get it directly from the driver, it is never
// written by Parcell::writeStrongBinder.
internal::Stability::markCompilationUnit(context.get());
} else {
ALOGW("Not able to get context object on %s.", mDriverName.c_str());
}
return context;
}
Note getStrongProxyForHandle here: the handle identifies a resource. A set of resource entries is kept in an array, and the handle parameter is the index of an entry in that array.
Let's see what getStrongProxyForHandle does:
// see b/166779391: cannot change the VNDK interface, so access like this
extern sp<BBinder> the_context_object;
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
std::unique_lock<std::mutex> _l(mLock);
if (handle == 0 && the_context_object != nullptr) return the_context_object;
//Look up the resource entry
handle_entry* e = lookupHandleLocked(handle);
if (e != nullptr) {
// We need to create a new BpBinder if there isn't currently one, OR we
// are unable to acquire a weak reference on this current one. The
// attemptIncWeak() is safe because we know the BpBinder destructor will always
// call expungeHandle(), which acquires the same lock we are holding now.
// We need to do this because there is a race condition between someone
// releasing a reference on this BpBinder, and a new reference on its handle
// arriving from the driver.
IBinder* b = e->binder;
//For a freshly created entry, binder is null, so this branch is taken
if (b == nullptr || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
// Special case for context manager...
// The context manager is the only object for which we create
// a BpBinder proxy without already holding a reference.
// Perform a dummy transaction to ensure the context manager
// is registered before we create the first local reference
// to it (which will occur when creating the BpBinder).
// If a local reference is created for the BpBinder when the
// context manager is not present, the driver will fail to
// provide a reference to the context manager, but the
// driver API does not return status.
//
// Note that this is not race-free if the context manager
// dies while this code runs.
IPCThreadState* ipc = IPCThreadState::self();
CallRestriction originalCallRestriction = ipc->getCallRestriction();
ipc->setCallRestriction(CallRestriction::NONE);
Parcel data;
status_t status = ipc->transact(
0, IBinder::PING_TRANSACTION, data, nullptr, 0);
ipc->setCallRestriction(originalCallRestriction);
if (status == DEAD_OBJECT)
return nullptr;
}
//Create a BpBinder
sp<BpBinder> b = BpBinder::PrivateAccessor::create(handle);
//Fill in the entry
e->binder = b.get();
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
// This little bit of nastyness is to allow us to add a primary
// reference to the remote proxy when this team doesn't have one
// but another team is sending the handle to us.
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
The key work here is creating the BpBinder object.
BpBinder
BpBinder is the client side: the proxy class used to interact with the server, where "p" stands for proxy.
Android also has a BBinder class. BBinder is the end opposite the proxy; it is the destination the proxy talks to.
BpBinder and BBinder pair up one-to-one: a given BpBinder object can only interact with its corresponding BBinder.
This raises a question worth thinking about:
Q: Why is a BBinder not created here?
A: Because we are a client of ServiceManager, we use the proxy end to interact with ServiceManager.
And another question:
Q: If BpBinder and BBinder correspond one-to-one, how is the correspondence established?
A: The Binder framework identifies the target BBinder by handle. In getStrongProxyForHandle(0) above, the 0 is passed all the way down to sp<BpBinder> b = BpBinder::PrivateAccessor::create(handle);. That 0 plays a special role in the whole Binder framework: handle 0 stands for the BBinder of ServiceManager.
Let's look at BpBinder's implementation:
frameworks/native/libs/binder/BpBinder.cpp
BpBinder::BpBinder(BinderHandle&& handle, int32_t trackedUid) : BpBinder(Handle(handle)) {
if constexpr (!kEnableKernelIpc) {
LOG_ALWAYS_FATAL("Binder kernel driver disabled at build time");
return;
}
mTrackedUid = trackedUid;
ALOGV("Creating BpBinder %p handle %d\n", this, this->binderHandle());
IPCThreadState::self()->incWeakHandle(this->binderHandle(), this);
}
BpBinder's constructor is surprisingly simple. The IPCThreadState here is important, but it is not our focus yet.
So far we have not found where BpBinder and BBinder actually communicate over the /dev/binder channel opened by ProcessState.
interface_cast<AidlServiceManager>
Look back at these two lines in the defaultServiceManager() method:
sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));
...
gDefaultServiceManager = sp<ServiceManagerShim>::make(sm);
Here is what AidlServiceManager is:
frameworks/native/libs/binder/IServiceManager.cpp
using AidlServiceManager = android::os::IServiceManager;
So AidlServiceManager is just an alias for android::os::IServiceManager.
Now the implementation of interface_cast:
/**
* If this is a local object and the descriptor matches, this will return the
* actual local object which is implementing the interface. Otherwise, this will
* return a proxy to the interface without checking the interface descriptor.
* This means that subsequent calls may fail with BAD_TYPE.
*/
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
From this we can tell that gDefaultServiceManager is, in essence, an IServiceManager.
Let's analyze it next:
IServiceManager
frameworks/native/libs/binder/include/binder/IServiceManager.h
/**
* Retrieve an existing service, blocking for a few seconds if it doesn't yet exist. This
* does polling. A more efficient way to make sure you unblock as soon as the service is
* available is to use waitForService or to use service notifications.
*
* Warning: when using this API, typically, you should call it in a loop. It's dangerous to
* assume that nullptr could mean that the service is not available. The service could just
* be starting. Generally, whether a service exists, this information should be declared
* externally (for instance, an Android feature might imply the existence of a service,
* a system property, or in the case of services in the VINTF manifest, it can be checked
* with isDeclared).
*/
[[deprecated("this polls for 5s, prefer waitForService or checkService")]]
virtual sp<IBinder> getService(const String16& name) const = 0;
/**
* Retrieve an existing service, non-blocking.
*/
virtual sp<IBinder> checkService( const String16& name) const = 0;
/**
* Register a service.
*/
// NOLINTNEXTLINE(google-default-arguments)
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated = false,
int dumpsysFlags = DUMP_FLAG_PRIORITY_DEFAULT) = 0;
/**
* Return list of all existing services.
*/
// NOLINTNEXTLINE(google-default-arguments)
virtual Vector<String16> listServices(int dumpsysFlags = DUMP_FLAG_PRIORITY_ALL) = 0;
The excerpt above shows declarations for various service-related operations. The comments also warn that when calling getService you should verify the result is non-null. Let's keep following the trail:
IInterface.h
Macro: DECLARE_META_INTERFACE
frameworks/native/libs/binder/include/binder/IInterface.h
#define DECLARE_META_INTERFACE(INTERFACE) \
public: \
static const ::android::String16 descriptor; \
static ::android::sp<I##INTERFACE> asInterface(const ::android::sp<::android::IBinder>& obj); \
virtual const ::android::String16& getInterfaceDescriptor() const; \
I##INTERFACE(); \
virtual ~I##INTERFACE(); \
static bool setDefaultImpl(::android::sp<I##INTERFACE> impl); \
static const ::android::sp<I##INTERFACE>& getDefaultImpl(); \
\
private: \
static ::android::sp<I##INTERFACE> default_impl; \
\
The DECLARE_META_INTERFACE macro declares several functions and a variable.
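For intuition, substituting INTERFACE = ServiceManager, the macro roughly produces the following declarations (a sketch with namespaces abbreviated and the real class's other members omitted):
// Rough expansion of DECLARE_META_INTERFACE(ServiceManager) (sketch only).
class IServiceManager : public IInterface {
public:
    static const String16 descriptor;
    static sp<IServiceManager> asInterface(const sp<IBinder>& obj);
    virtual const String16& getInterfaceDescriptor() const;
    IServiceManager();
    virtual ~IServiceManager();
    static bool setDefaultImpl(sp<IServiceManager> impl);
    static const sp<IServiceManager>& getDefaultImpl();
private:
    static sp<IServiceManager> default_impl;
    // ... the service methods (getService, addService, ...) follow
};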
Substitute the macro into our earlier interface_cast<AidlServiceManager> code:
sp<AidlServiceManager> sm = nullptr;
while (sm == nullptr) {
sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));
if (sm == nullptr) {
ALOGE("Waiting 1s on context object on %s.", ProcessState::self()->getDriverName().c_str());
sleep(1);
}
Combining the interface_cast template with the DECLARE_META_INTERFACE(INTERFACE) macro above, and recalling that AidlServiceManager is IServiceManager, the code is equivalent to:
sp<IServiceManager> sm = nullptr;
while (sm == nullptr) {
sm = IServiceManager::asInterface(ProcessState::self()->getContextObject(nullptr));
if (sm == nullptr) {
ALOGE("Waiting 1s on context object on %s.", ProcessState::self()->getDriverName().c_str());
sleep(1);
}
}
Reading further, we come across another macro: IMPLEMENT_META_INTERFACE.
We already know DECLARE_META_INTERFACE declares the interface, so IMPLEMENT_META_INTERFACE should be the one that implements it.
Let's verify that guess:
Macro: IMPLEMENT_META_INTERFACE
#ifndef DO_NOT_CHECK_MANUAL_BINDER_INTERFACES
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
static_assert(internal::allowedManualInterface(NAME), \
"b/64223827: Manually written binder interfaces are " \
"considered error prone and frequently have bugs. " \
"The preferred way to add interfaces is to define " \
"an .aidl file to auto-generate the interface. If " \
"an interface must be manually written, add its " \
"name to the allowlist."); \
DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(INTERFACE, NAME)
#else
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
#endif
// Macro for an interface type.
#define DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
const ::android::StaticString16 I##INTERFACE##_descriptor_static_str16( \
__IINTF_CONCAT(u, NAME)); \
const ::android::String16 I##INTERFACE::descriptor(I##INTERFACE##_descriptor_static_str16); \
DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE0(I##INTERFACE, I##INTERFACE, Bp##INTERFACE)
// Macro to be used by both IMPLEMENT_META_INTERFACE and IMPLEMENT_META_NESTED_INTERFACE
#define DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE0(ITYPE, INAME, BPTYPE) \
const ::android::String16& ITYPE::getInterfaceDescriptor() const { return ITYPE::descriptor; } \
::android::sp<ITYPE> ITYPE::asInterface(const ::android::sp<::android::IBinder>& obj) { \
::android::sp<ITYPE> intr; \
if (obj != nullptr) { \
intr = ::android::sp<ITYPE>::cast(obj->queryLocalInterface(ITYPE::descriptor)); \
if (intr == nullptr) { \
intr = ::android::sp<BPTYPE>::make(obj); \
} \
} \
return intr; \
} \
::android::sp<ITYPE> ITYPE::default_impl; \
bool ITYPE::setDefaultImpl(::android::sp<ITYPE> impl) { \
/* Only one user of this interface can use this function */ \
/* at a time. This is a heuristic to detect if two different */ \
/* users in the same process use this function. */ \
assert(!ITYPE::default_impl); \
if (impl) { \
ITYPE::default_impl = std::move(impl); \
return true; \
} \
return false; \
} \
const ::android::sp<ITYPE>& ITYPE::getDefaultImpl() { return ITYPE::default_impl; } \
ITYPE::INAME() {} \
This confirms the guess: the IMPLEMENT_META_INTERFACE macro implements what DECLARE_META_INTERFACE declares.
And the line intr = ::android::sp<BPTYPE>::make(obj); clearly creates a new object, where BPTYPE is the Bp##INTERFACE type passed into the macro. So what gets created is a BpServiceManager object.
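Substituting ITYPE = IServiceManager and BPTYPE = BpServiceManager, asInterface roughly expands to (a sketch of the macro expansion, namespaces abbreviated):
// Sketch: asInterface() after macro substitution for IServiceManager.
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj) {
    sp<IServiceManager> intr;
    if (obj != nullptr) {
        // In the service's own process this returns the local object...
        intr = sp<IServiceManager>::cast(obj->queryLocalInterface(IServiceManager::descriptor));
        if (intr == nullptr) {
            // ...but for our BpBinder(0) it is null, so a BpServiceManager
            // wrapping the BpBinder is created instead.
            intr = sp<BpServiceManager>::make(obj);
        }
    }
    return intr;
}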
Stage summary
At this point we can take stock.
Look again at this line:
sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));
In this line, ProcessState::self()->getContextObject(nullptr) creates the BpBinder object, and that BpBinder is then passed as the argument to construct a BpServiceManager object. sm is that BpServiceManager.
Sorting out the relationships
We know BpBinder communicates with BBinder, but why does a BpServiceManager suddenly appear here? How are they related?
Their inheritance relationships give us the clue.
BpServiceManager does not inherit from BBinder directly, so how does it communicate with Binder at all?
The answer is that mRemote inside BpRefBase (one of BpServiceManager's base classes, via BpInterface) is the BpBinder.
Look at the following code:
prebuilts/vndk/v30/x86/include/generated-headers/frameworks/native/libs/binder/libbinder/android_vendor.30_x86_shared/gen/aidl/android/os/BpServiceManager.h
class BpServiceManager : public ::android::BpInterface<IServiceManager> {
public:
explicit BpServiceManager(const ::android::sp<::android::IBinder>& _aidl_impl);
virtual ~BpServiceManager() = default;
...
}; // class BpServiceManager
In BpServiceManager's constructor, the parameter _aidl_impl is an IBinder, so there is at least an indirect connection to Binder. It is in fact the BpBinder. Next, BpInterface:
BpInterface
frameworks/native/libs/binder/include/binder/IInterface.h
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
explicit BpInterface(const sp<IBinder>& remote);
typedef INTERFACE BaseInterface;
protected:
virtual IBinder* onAsBinder();
};
BpRefBase:
frameworks/native/libs/binder/Binder.cpp
BpRefBase::BpRefBase(const sp<IBinder>& o)
: mRemote(o.get()), mRefs(nullptr), mState(0)
{
extendObjectLifetime(OBJECT_LIFETIME_WEAK);
if (mRemote) {
mRemote->incStrong(this); // Removed on first IncStrong().
mRefs = mRemote->createWeak(this); // Held for our entire lifetime.
}
}
Here, through the constructor, mRemote finally points to the BpBinder object created earlier.
Stage summary
Recall the defaultServiceManager() function; there are two key objects:
- A BpBinder object, whose handle value is 0.
- A BpServiceManager object, whose mRemote is that BpBinder.
BpServiceManager implements the IServiceManager interface and holds the BpBinder as its communication representative. The preparations for communication are nearly complete, but something is still missing.
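How the pieces fit together can be sketched like this (a simplified view; the ServiceManagerShim wrapper shown earlier is omitted):
// Sketch: the key objects after defaultServiceManager() (simplified).
sp<IBinder> context = ProcessState::self()->getContextObject(nullptr);
// `context` is a BpBinder whose binderHandle() == 0, i.e. ServiceManager.
sp<IServiceManager> sm = interface_cast<IServiceManager>(context);
// `sm` is a BpServiceManager; BpRefBase::mRemote points at `context`,
// so sm->addService(...) ends up in BpBinder::transact() with handle 0.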
Next, we analyze the registration of MediaPlayerService.
Registering the MediaPlayerService service
Now look back at main() in main_mediaserver.cpp:
int main(int argc __unused, char **argv __unused)
{
signal(SIGPIPE, SIG_IGN);
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm(defaultServiceManager());
ALOGI("ServiceManager: %p", sm.get());
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
registerExtensions();
::android::hardware::configureRpcThreadpool(16, false);
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
::android::hardware::joinRpcThreadpool();
}
MediaPlayerService::instantiate();
//frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(
String16("media.player"), new MediaPlayerService());
}
//frameworks/native/libs/binder/include/binder/IServiceManager.h
/**
* Register a service.
*/
// NOLINTNEXTLINE(google-default-arguments)
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated = false,
int dumpsysFlags = DUMP_FLAG_PRIORITY_DEFAULT) = 0;
//frameworks/native/libs/binder/IServiceManager.cpp
status_t ServiceManagerShim::addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated, int dumpsysPriority)
{
Status status = mTheRealServiceManager->addService(
String8(name).c_str(), service, allowIsolated, dumpsysPriority);
return status.exceptionCode();
}
From the earlier analysis, defaultServiceManager() returns a ServiceManagerShim wrapping the AIDL proxy, so mTheRealServiceManager here is the BpServiceManager.
Here is its implementation; this code is generated from the AIDL file:
//out/soong/.intermediates/frameworks/native/libs/binder/libbinder/android_arm_armv8-a_shared/gen/aidl/android/os/IServiceManager.cpp
namespace android {
namespace os {
BpServiceManager::BpServiceManager(const ::android::sp<::android::IBinder>& _aidl_impl)
: BpInterface<IServiceManager>(_aidl_impl){//_aidl_impl is the BpBinder(0) instance
}
--------------------------------------------------
::android::binder::Status BpServiceManager::addService(const ::std::string& name, const ::android::sp<::android::IBinder>& service, bool allowIsolated, int32_t dumpPriority) {
::android::Parcel _aidl_data;
_aidl_data.markForBinder(remoteStrong());//0. Related to RPC binder
::android::Parcel _aidl_reply;
::android::status_t _aidl_ret_status = ::android::OK;
::android::binder::Status _aidl_status;
//1. Write the interface token
_aidl_ret_status = _aidl_data.writeInterfaceToken(getInterfaceDescriptor());
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
//2. Write the name
_aidl_ret_status = _aidl_data.writeUtf8AsUtf16(name);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
//3. Write the binder object
_aidl_ret_status = _aidl_data.writeStrongBinder(service);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
//4. Write allowIsolated
_aidl_ret_status = _aidl_data.writeBool(allowIsolated);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
//5. Write dumpPriority
_aidl_ret_status = _aidl_data.writeInt32(dumpPriority);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
//6. Use BpBinder(0)'s transact to start the binder communication
_aidl_ret_status = remote()->transact(BnServiceManager::TRANSACTION_addService, _aidl_data, &_aidl_reply, 0);
if (UNLIKELY(_aidl_ret_status == ::android::UNKNOWN_TRANSACTION && IServiceManager::getDefaultImpl())) {
return IServiceManager::getDefaultImpl()->addService(name, service, allowIsolated, dumpPriority);
}
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
//7. If there is a return value, read it from the reply parcel
_aidl_ret_status = _aidl_status.readFromParcel(_aidl_reply);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
if (!_aidl_status.isOk()) {
return _aidl_status;
}
_aidl_error:
_aidl_status.setFromStatusT(_aidl_ret_status);
return _aidl_status;
}
Compare this with the Android 10 version of the code:
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated, int dumpsysPriority) {
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
data.writeInt32(dumpsysPriority);
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
The logic is essentially the same.
Back to our main point. Based on the analysis so far, we can answer two questions:
- BpServiceManager's addService packs the request data into data and passes it to BpBinder's transact function. Does it hand the communication work to BpBinder?
- Is BpServiceManager's addService a business-layer function?
The answer to both questions is yes.
Stage summary
Based on the analysis above, we can conclude:
the business layer's job is to pack the request information and hand it to the communication layer.
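The same division of labor appears in any hand-written Bp proxy. Here is a hedged sketch for a hypothetical IFoo interface (IFoo, doSomething, and TRANSACTION_doSomething are illustrative names, not AOSP code):
// Generic Bp-proxy pattern (hypothetical IFoo interface).
status_t BpFoo::doSomething(const String16& arg) {
    Parcel data, reply;
    // Business layer: pack the RPC header and the arguments...
    data.writeInterfaceToken(IFoo::getInterfaceDescriptor());
    data.writeString16(arg);
    // ...then hand the packed Parcel to the communication layer, i.e. the
    // BpBinder stored in BpRefBase::mRemote (accessed through remote()).
    status_t err = remote()->transact(TRANSACTION_doSomething, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}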
transact
//frameworks/native/libs/binder/BpBinder.cpp
// NOLINTNEXTLINE(google-default-arguments)
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
bool privateVendor = flags & FLAG_PRIVATE_VENDOR;
// don't send userspace flags to the kernel
flags = flags & ~static_cast<uint32_t>(FLAG_PRIVATE_VENDOR);
// user transactions require a given stability level
if (code >= FIRST_CALL_TRANSACTION && code <= LAST_CALL_TRANSACTION) {
using android::internal::Stability;
int16_t stability = Stability::getRepr(this);
Stability::Level required = privateVendor ? Stability::VENDOR
: Stability::getLocalLevel();
if (CC_UNLIKELY(!Stability::check(stability, required))) {
ALOGE("Cannot do a user transaction on a %s binder (%s) in a %s context.",
Stability::levelString(stability).c_str(),
String8(getInterfaceDescriptor()).c_str(),
Stability::levelString(required).c_str());
return BAD_TYPE;
}
}
status_t status;
if (CC_UNLIKELY(isRpcBinder())) {
status = rpcSession()->transact(sp<IBinder>::fromExisting(this), code, data, reply,
flags);
} else {
if constexpr (!kEnableKernelIpc) {
LOG_ALWAYS_FATAL("Binder kernel driver disabled at build time");
return INVALID_OPERATION;
}
//Here BpBinder hands the transact work to IPCThreadState::self(); the handle (mHandle) is one of the arguments.
status = IPCThreadState::self()->transact(binderHandle(), code, data, reply, flags);
}
if (data.dataSize() > LOG_TRANSACTIONS_OVER_SIZE) {
Mutex::Autolock _l(mLock);
ALOGW("Large outgoing transaction of %zu bytes, interface descriptor %s, code %d",
data.dataSize(), String8(mDescriptorCache).c_str(), code);
}
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
Let's check whether binderHandle() really holds the handle:
//frameworks/native/libs/binder/BpBinder.cpp
int32_t BpBinder::binderHandle() const {
return std::get<BinderHandle>(mHandle).handle;
}
It does; presumably it is what identifies the target of the transact.
We met IPCThreadState before, and here it is again; a fair guess is that it is central to Binder communication. Let's analyze it.
//frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS.load(std::memory_order_acquire)) {
restart:
//TLS is short for Thread Local Storage
const pthread_key_t k = gTLS;
//pthread_getspecific retrieves this thread's private storage
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;
}
// Racey, heuristic test for simultaneous shutdown.
if (gShutdown.load(std::memory_order_relaxed)) {
ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
return nullptr;
}
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS.load(std::memory_order_relaxed)) {
int key_create_value = pthread_key_create(&gTLS, threadDestructor);
if (key_create_value != 0) {
pthread_mutex_unlock(&gTLSMutex);
ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
strerror(key_create_value));
return nullptr;
}
gHaveTLS.store(true, std::memory_order_release);
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
Here is IPCThreadState's constructor:
//frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),
mServingStackPointer(nullptr),
mServingStackPointerGuard(nullptr),
mWorkSource(kUnsetWorkSource),
mPropagateWorkSource(false),
mIsLooper(false),
mIsFlushing(false),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0),
mCallRestriction(mProcess->mCallRestriction) {
//Store this object as a thread-local variable; it is thread-private, other threads cannot see it
pthread_setspecific(gTLS, this);
clearCaller();
mHasExplicitIdentity = false;
//mIn and mOut are two Parcels; think of them as the receive and send buffers
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}
pthread_setspecific stores the object as a thread-private variable.
Every thread has its own IPCThreadState, and each IPCThreadState has an mIn and an mOut: mIn receives data from the binder device, mOut holds data to be sent to the binder device.
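As a side note, here is a minimal standalone sketch (not AOSP code) of the pthread TLS pattern that IPCThreadState::self() uses, with the shutdown handling and gHaveTLS bookkeeping stripped away:
#include <pthread.h>
// Minimal sketch of the per-thread singleton pattern (not AOSP code).
class ThreadState {
public:
    static ThreadState* self() {
        // One process-wide key; the destructor runs when each thread exits.
        static pthread_key_t key = [] {
            pthread_key_t k;
            pthread_key_create(&k, [](void* p) { delete static_cast<ThreadState*>(p); });
            return k;
        }();
        if (auto* st = static_cast<ThreadState*>(pthread_getspecific(key)))
            return st; // this thread already has an instance
        auto* st = new ThreadState();
        pthread_setspecific(key, st); // visible only to the calling thread
        return st;
    }
private:
    ThreadState() = default;
};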
IPCThreadState::transact
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
LOG_ALWAYS_FATAL_IF(data.isForRpc(), "Parcel constructed for RPC, but being used with binder.");
status_t err;
flags |= TF_ACCEPT_FDS;
IF_LOG_TRANSACTIONS() {
std::ostringstream logStream;
logStream << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand " << handle
<< " / code " << TypeCode(code) << ": \t" << data << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
(flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
//BC_TRANSACTION is a command code the application sends to the binder device; replies from the driver use BR-prefixed codes. Both are defined in binder_module.h. Request and reply codes pair up one-to-one, but you have to read the Binder driver to see the full mapping; we don't need it yet.
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
if (mCallRestriction != ProcessState::CallRestriction::NONE) [[unlikely]] {
if (mCallRestriction == ProcessState::CallRestriction::ERROR_IF_NOT_ONEWAY) {
ALOGE("Process making non-oneway call (code: %u) but is restricted.", code);
CallStack::logStack("non-oneway call", CallStack::getCurrent(10).get(),
ANDROID_LOG_ERROR);
} else /* FATAL_IF_NOT_ONEWAY */ {
LOG_ALWAYS_FATAL("Process may not make non-oneway calls (code: %u).", code);
}
}
#if 0
if (code == 4) { // relayout
ALOGI(">>>>>> CALLING transaction 4");
} else {
ALOGI(">>>>>> CALLING transaction %d", code);
}
#endif
//Wait for the reply
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
#if 0
if (code == 4) { // relayout
ALOGI("<<<<<< RETURNING transaction 4");
} else {
ALOGI("<<<<<< RETURNING transaction %d", code);
}
#endif
IF_LOG_TRANSACTIONS() {
std::ostringstream logStream;
logStream << "BR_REPLY thr " << (void*)pthread_self() << " / hand " << handle << ": ";
if (reply)
logStream << "\t" << *reply << "\n";
else
logStream << "(none requested)"
<< "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
} else {
//Wait for the reply
err = waitForResponse(nullptr, nullptr);
}
return err;
}
The shape of a communication is now clearly visible: send the request, then wait for the reply.
But what is the handle parameter of writeTransactionData used for?
Here is writeTransactionData's implementation:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
//The data structure used to communicate with the binder device
binder_transaction_data tr;
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
//The handle value goes into target, identifying the destination; 0 marks ServiceManager.
tr.target.handle = handle;
//code is the message code
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
//Write the command into mOut
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
At this point the request from addService has been written into mOut.
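Conceptually, mOut now holds a BC_TRANSACTION command word followed by a binder_transaction_data. A simplified sketch of what writeTransactionData() appends (not AOSP code; error handling and the offsets buffer omitted):
#include <linux/android/binder.h> // BC_TRANSACTION, binder_transaction_data
#include <cstddef>
#include <cstdint>
#include <vector>
// Sketch: append a BC_TRANSACTION command to an outgoing buffer, the way
// writeTransactionData() fills mOut.
void appendTransaction(std::vector<uint8_t>& out, uint32_t handle,
                       uint32_t code, const void* parcelData, size_t parcelSize) {
    uint32_t cmd = BC_TRANSACTION; // the command word comes first
    binder_transaction_data tr{};
    tr.target.handle = handle;     // 0 == ServiceManager
    tr.code = code;                // e.g. the addService transaction code
    tr.data_size = parcelSize;
    tr.data.ptr.buffer = reinterpret_cast<binder_uintptr_t>(parcelData);
    auto append = [&out](const void* p, size_t n) {
        auto* b = static_cast<const uint8_t*>(p);
        out.insert(out.end(), b, b + n);
    };
    append(&cmd, sizeof(cmd)); // mOut.writeInt32(cmd)
    append(&tr, sizeof(tr));   // mOut.write(&tr, sizeof(tr))
}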
Next, waitForResponse:
//frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
//talkWithDriver: the key call
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
IF_LOG_COMMANDS() {
std::ostringstream logStream;
logStream << "Processing waitForResponse Command: " << getReturnString(cmd) << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
switch (cmd) {
case BR_ONEWAY_SPAM_SUSPECT:
ALOGE("Process seems to be sending too many oneway calls.");
CallStack::logStack("oneway spamming", CallStack::getCurrent().get(),
ANDROID_LOG_ERROR);
[[fallthrough]];
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
case BR_TRANSACTION_PENDING_FROZEN:
ALOGW("Sending oneway calls to frozen process.");
goto finish;
case BR_DEAD_REPLY:
err = DEAD_OBJECT;
goto finish;
case BR_FAILED_REPLY:
err = FAILED_TRANSACTION;
goto finish;
case BR_FROZEN_REPLY:
err = FAILED_TRANSACTION;
goto finish;
case BR_ACQUIRE_RESULT:
{
ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
const int32_t result = mIn.readInt32();
if (!acquireResult) continue;
*acquireResult = result ? NO_ERROR : INVALID_OPERATION;
}
goto finish;
case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer);
} else {
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size / sizeof(binder_size_t));
}
} else {
freeBuffer(reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size / sizeof(binder_size_t));
continue;
}
}
goto finish;
default:
//Execute the command
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
logExtendedError();
}
return err;
}
Several things here deserve attention; let's start with executeCommand in the default branch:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch ((uint32_t)cmd) {
case BR_ERROR:
result = mIn.readInt32();
break;
case BR_OK:
break;
case BR_ACQUIRE:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
ALOG_ASSERT(refs->refBase() == obj,
"BR_ACQUIRE: object %p does not match cookie %p (expected %p)",
refs, obj, refs->refBase());
obj->incStrong(mProcess.get());
IF_LOG_REMOTEREFS() {
LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj);
obj->printRefs();
}
mOut.writeInt32(BC_ACQUIRE_DONE);
mOut.writePointer((uintptr_t)refs);
mOut.writePointer((uintptr_t)obj);
break;
case BR_RELEASE:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
ALOG_ASSERT(refs->refBase() == obj,
"BR_RELEASE: object %p does not match cookie %p (expected %p)",
refs, obj, refs->refBase());
IF_LOG_REMOTEREFS() {
LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj);
obj->printRefs();
}
mPendingStrongDerefs.push(obj);
break;
case BR_INCREFS:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
refs->incWeak(mProcess.get());
mOut.writeInt32(BC_INCREFS_DONE);
mOut.writePointer((uintptr_t)refs);
mOut.writePointer((uintptr_t)obj);
break;
case BR_DECREFS:
refs = (RefBase::weakref_type*)mIn.readPointer();
// NOLINTNEXTLINE(clang-analyzer-deadcode.DeadStores)
obj = (BBinder*)mIn.readPointer(); // consume
// NOTE: This assertion is not valid, because the object may no
// longer exist (thus the (BBinder*)cast above resulting in a different
// memory address).
//ALOG_ASSERT(refs->refBase() == obj,
// "BR_DECREFS: object %p does not match cookie %p (expected %p)",
// refs, obj, refs->refBase());
mPendingWeakDerefs.push(refs);
break;
case BR_ATTEMPT_ACQUIRE:
refs = (RefBase::weakref_type*)mIn.readPointer();
obj = (BBinder*)mIn.readPointer();
{
const bool success = refs->attemptIncStrong(mProcess.get());
ALOG_ASSERT(success && refs->refBase() == obj,
"BR_ATTEMPT_ACQUIRE: object %p does not match cookie %p (expected %p)",
refs, obj, refs->refBase());
mOut.writeInt32(BC_ACQUIRE_RESULT);
mOut.writeInt32((int32_t)success);
}
break;
case BR_TRANSACTION_SEC_CTX:
case BR_TRANSACTION:
{
binder_transaction_data_secctx tr_secctx;
binder_transaction_data& tr = tr_secctx.transaction_data;
if (cmd == (int) BR_TRANSACTION_SEC_CTX) {
result = mIn.read(&tr_secctx, sizeof(tr_secctx));
} else {
result = mIn.read(&tr, sizeof(tr));
tr_secctx.secctx = 0;
}
ALOG_ASSERT(result == NO_ERROR,
"Not enough command data for brTRANSACTION");
if (result != NO_ERROR) break;
Parcel buffer;
buffer.ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), freeBuffer);
const void* origServingStackPointer = mServingStackPointer;
mServingStackPointer = __builtin_frame_address(0);
const pid_t origPid = mCallingPid;
const char* origSid = mCallingSid;
const uid_t origUid = mCallingUid;
const bool origHasExplicitIdentity = mHasExplicitIdentity;
const int32_t origStrictModePolicy = mStrictModePolicy;
const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
const int32_t origWorkSource = mWorkSource;
const bool origPropagateWorkSet = mPropagateWorkSource;
// Calling work source will be set by Parcel#enforceInterface. Parcel#enforceInterface
// is only guaranteed to be called for AIDL-generated stubs so we reset the work source
// here to never propagate it.
clearCallingWorkSource();
clearPropagateWorkSource();
mCallingPid = tr.sender_pid;
mCallingSid = reinterpret_cast<const char*>(tr_secctx.secctx);
mCallingUid = tr.sender_euid;
mHasExplicitIdentity = false;
mLastTransactionBinderFlags = tr.flags;
// ALOGI(">>>> TRANSACT from pid %d sid %s uid %d\n", mCallingPid,
// (mCallingSid ? mCallingSid : "<N/A>"), mCallingUid);
Parcel reply;
status_t error;
IF_LOG_TRANSACTIONS() {
std::ostringstream logStream;
logStream << "BR_TRANSACTION thr " << (void*)pthread_self() << " / obj "
<< tr.target.ptr << " / code " << TypeCode(tr.code) << ": \t" << buffer
<< "\n"
<< "Data addr = " << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
<< ", offsets addr="
<< reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
if (tr.target.ptr) {
// We only have a weak reference on the target object, so we must first try to
// safely acquire a strong reference before doing anything else with it.
if (reinterpret_cast<RefBase::weakref_type*>(
tr.target.ptr)->attemptIncStrong(this)) {
error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
&reply, tr.flags);
reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
} else {
error = UNKNOWN_TRANSACTION;
}
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
//ALOGI("<<<< TRANSACT from pid %d restore pid %d sid %s uid %d\n",
// mCallingPid, origPid, (origSid ? origSid : "<N/A>"), origUid);
if ((tr.flags & TF_ONE_WAY) == 0) {
LOG_ONEWAY("Sending reply to %d!", mCallingPid);
if (error < NO_ERROR) reply.setError(error);
// b/238777741: clear buffer before we send the reply.
// Otherwise, there is a race where the client may
// receive the reply and send another transaction
// here and the space used by this transaction won't
// be freed for the client.
buffer.setDataSize(0);
constexpr uint32_t kForwardReplyFlags = TF_CLEAR_BUF;
sendReply(reply, (tr.flags & kForwardReplyFlags));
} else {
if (error != OK) {
std::ostringstream logStream;
logStream << "oneway function results for code " << tr.code << " on binder at "
<< reinterpret_cast<void*>(tr.target.ptr)
<< " will be dropped but finished with status "
<< statusToString(error);
// ideally we could log this even when error == OK, but it
// causes too much logspam because some manually-written
// interfaces have clients that call methods which always
// write results, sometimes as oneway methods.
if (reply.dataSize() != 0) {
logStream << " and reply parcel size " << reply.dataSize();
}
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}
mServingStackPointer = origServingStackPointer;
mCallingPid = origPid;
mCallingSid = origSid;
mCallingUid = origUid;
mHasExplicitIdentity = origHasExplicitIdentity;
mStrictModePolicy = origStrictModePolicy;
mLastTransactionBinderFlags = origTransactionBinderFlags;
mWorkSource = origWorkSource;
mPropagateWorkSource = origPropagateWorkSet;
IF_LOG_TRANSACTIONS() {
std::ostringstream logStream;
logStream << "BC_REPLY thr " << (void*)pthread_self() << " / obj " << tr.target.ptr
<< ": \t" << reply << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
}
break;
//Here we receive the Binder driver's notification that a service has died
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writePointer((uintptr_t)proxy);
} break;
case BR_CLEAR_DEATH_NOTIFICATION_DONE:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->getWeakRefs()->decWeak(proxy);
} break;
case BR_FINISHED:
result = TIMED_OUT;
break;
case BR_NOOP:
break;
//Here the driver instructs us to spawn a new thread for Binder communication
case BR_SPAWN_LOOPER:
mProcess->spawnPooledThread(false);
break;
default:
ALOGE("*** BAD COMMAND %d received from Binder driver\n", cmd);
result = UNKNOWN_ERROR;
break;
}
if (result != NO_ERROR) {
mLastError = result;
}
return result;
}
This is where replies from the binder driver are handled.
So where is the actual interaction with the binder device implemented?
Let's look at talkWithDriver:
talkWithDriver
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD < 0) {
return -EBADF;
}
//The structure used to exchange data with the binder device
binder_write_read bwr;
// Is the read buffer empty?
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// We don't want to write anything if we are still reading
// from data left in the input buffer and the caller
// has requested to read the next data.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
//Fill in the outgoing commands
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
// This is what we'll read.
if (doReceive && needRead) {
//Fill in the receive buffer info; incoming data lands in mIn.
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
IF_LOG_COMMANDS() {
std::ostringstream logStream;
if (outAvail != 0) {
logStream << "Sending commands to driver: ";
const void* cmds = (const void*)bwr.write_buffer;
const void* end = ((const uint8_t*)cmds) + bwr.write_size;
logStream << "\t" << HexDump(cmds, bwr.write_size) << "\n";
while (cmds < end) cmds = printCommand(logStream, cmds);
}
logStream << "Size of receive buffer: " << bwr.read_size << ", needRead: " << needRead
<< ", doReceive: " << doReceive << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
// Return immediately if there is nothing to do.
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
IF_LOG_COMMANDS() {
std::ostringstream logStream;
logStream << "About to read/write, write size = " << mOut.dataSize() << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
#if defined(__ANDROID__)
//Not a read/write call, but ioctl
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
if (mProcess->mDriverFD < 0) {
err = -EBADF;
}
IF_LOG_COMMANDS() {
std::ostringstream logStream;
logStream << "Finished read/write, write size = " << mOut.dataSize() << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
} while (err == -EINTR);
IF_LOG_COMMANDS() {
std::ostringstream logStream;
logStream << "Our err: " << (void*)(intptr_t)err
<< ", write consumed: " << bwr.write_consumed << " (of " << mOut.dataSize()
<< "), read consumed: " << bwr.read_consumed << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
LOG_ALWAYS_FATAL("Driver did not consume write buffer. "
"err: %s consumed: %zu of %zu",
statusToString(err).c_str(),
(size_t)bwr.write_consumed,
mOut.dataSize());
else {
mOut.setDataSize(0);
processPostWriteDerefs();
}
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
IF_LOG_COMMANDS() {
std::ostringstream logStream;
logStream << "Remaining data size: " << mOut.dataSize() << "\n";
logStream << "Received commands from driver: ";
const void* cmds = mIn.data();
const void* end = mIn.data() + mIn.dataSize();
logStream << "\t" << HexDump(cmds, mIn.dataSize()) << "\n";
while (cmds < end) cmds = printReturnCommand(logStream, cmds);
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
return NO_ERROR;
}
ALOGE_IF(mProcess->mDriverFD >= 0,
"Driver returned error (%s). This is a bug in either libbinder or the driver. This "
"thread's connection to %s will no longer work.",
statusToString(err).c_str(), mProcess->mDriverName.c_str());
return err;
}
This is the function that actually talks to the binder device.
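Stripped of logging and Parcel bookkeeping, the core of talkWithDriver() is a single BINDER_WRITE_READ ioctl. A minimal sketch (not AOSP code; assumes an already-opened binder fd):
#include <cerrno>
#include <cstddef>
#include <sys/ioctl.h>
#include <linux/android/binder.h> // binder_write_read, BINDER_WRITE_READ
// Minimal sketch of the BINDER_WRITE_READ round trip.
int talk(int binderFd, const void* writeBuf, size_t writeSize,
         void* readBuf, size_t readCapacity) {
    binder_write_read bwr{};
    bwr.write_size = writeSize;   // BC_* commands to send
    bwr.write_buffer = reinterpret_cast<binder_uintptr_t>(writeBuf);
    bwr.read_size = readCapacity; // room for incoming BR_* replies
    bwr.read_buffer = reinterpret_cast<binder_uintptr_t>(readBuf);
    int ret;
    do {
        ret = ioctl(binderFd, BINDER_WRITE_READ, &bwr); // may block in the driver
    } while (ret < 0 && errno == EINTR); // retry on signal, like IPCThreadState
    return ret < 0 ? -errno : static_cast<int>(bwr.read_consumed);
}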
At this point we should have a fairly complete picture of MediaPlayerService's registration.
Two functions remain.
startThreadPool and joinThreadPool
Next, the implementations of these two functions.
startThreadPool
ProcessState::self()->startThreadPool();
//frameworks/native/libs/binder/ProcessState.cpp
void ProcessState::startThreadPool()
{
std::unique_lock<std::mutex> _l(mLock);
if (!mThreadPoolStarted) {
if (mMaxThreads == 0) {
// see also getThreadPoolMaxTotalThreadCount
ALOGW("Extra binder thread started, but 0 threads requested. Do not use "
"*startThreadPool when zero threads are requested.");
}
mThreadPoolStarted = true;
spawnPooledThread(true);
}
}
This calls spawnPooledThread with the argument set to true.
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
ALOGV("Spawning new pooled thread, name=%s\n", name.c_str());
//Create a new PoolThread
sp<Thread> t = sp<PoolThread>::make(isMain);
t->run(name.c_str());
pthread_mutex_lock(&mThreadCountLock);
mKernelStartedThreads++;
pthread_mutex_unlock(&mThreadCountLock);
}
// TODO: if startThreadPool is called on another thread after the process
// starts up, the kernel might think that it already requested those
// binder threads, and additional won't be started. This is likely to
// cause deadlocks, and it will also cause getThreadPoolMaxTotalThreadCount
// to return too high of a value.
}
spawnPooledThread's parameter is isMain, and true was passed in: the newly spawned thread will register itself as a "main" binder looper thread (it writes BC_ENTER_LOOPER, as we'll see in joinThreadPool below).
A new PoolThread is created here; its implementation:
//frameworks/native/libs/binder/ProcessState.cpp
class PoolThread : public Thread
{
public:
explicit PoolThread(bool isMain)
: mIsMain(isMain)
{
}
protected:
virtual bool threadLoop()
{
//This new thread creates its own IPCThreadState
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};
PoolThread starts a new thread whose threadLoop runs joinThreadPool. Despite the name, joinThreadPool does not spawn yet another thread; it joins the calling thread into the binder thread pool. Its implementation:
joinThreadPool
//frameworks/native/libs/binder/IPCThreadState.cpp
void IPCThreadState::joinThreadPool(bool isMain)
{
LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mCurrentThreads++;
pthread_mutex_unlock(&mProcess->mThreadCountLock);
//If isMain == true, register as a main looper; the command is written into mOut and sent out later
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
mIsLooper = true;
status_t result;
do {
processPendingDerefs();
// now get the next command to be processed, waiting if necessary
//Process messages
result = getAndExecuteCommand();
if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
LOG_ALWAYS_FATAL("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
mProcess->mDriverFD, result);
}
// Let this thread exit the thread pool if it is no longer
// needed and it is not the main process thread.
if(result == TIMED_OUT && !isMain) {
break;
}
} while (result != -ECONNREFUSED && result != -EBADF);
LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%d\n",
(void*)pthread_self(), getpid(), result);
mOut.writeInt32(BC_EXIT_LOOPER);
mIsLooper = false;
talkWithDriver(false);
pthread_mutex_lock(&mProcess->mThreadCountLock);
LOG_ALWAYS_FATAL_IF(mProcess->mCurrentThreads == 0,
"Threadpool thread count = 0. Thread cannot exist and exit in empty "
"threadpool\n"
"Misconfiguration. Increase threadpool max threads configuration\n");
mProcess->mCurrentThreads--;
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
Let's look inside getAndExecuteCommand():
//frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
//Calls talkWithDriver
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
cmd = mIn.readInt32();
IF_LOG_COMMANDS() {
std::ostringstream logStream;
logStream << "Processing top-level Command: " << getReturnString(cmd) << "\n";
std::string message = logStream.str();
ALOGI("%s", message.c_str());
}
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs == 0) {
mProcess->mStarvationStartTimeMs = uptimeMillis();
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
result = executeCommand(cmd);
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs != 0) {
int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
if (starvationTimeMs > 100) {
ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
mProcess->mMaxThreads, starvationTimeMs);
}
mProcess->mStarvationStartTimeMs = 0;
}
// Cond broadcast can be expensive, so don't send it every time a binder
// call is processed. b/168806193
if (mProcess->mWaitingForThreads > 0) {
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
return result;
}
It calls talkWithDriver(); in other words, joinThreadPool() ends up in talkWithDriver().
And at the very start, main() called startThreadPool and then joinThreadPool:
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
Now let's summarize based on the analysis above:
Stage summary
Both startThreadPool and joinThreadPool end up in talkWithDriver(), looking for work from the binder device.
So:
Q: How many threads are doing this?
A: startThreadPool spawned a new thread whose loop calls joinThreadPool() to read the binder device for requests, and the main thread also calls joinThreadPool() to do the same. So at this point there are two.
This also shows, indirectly, that the binder device supports multithreaded access; the locking scattered through the code above exists for the same reason.
Summary
Using MediaServer as the entry point, we have analyzed the Binder mechanism.
Looking back over the whole article: Binder is a communication mechanism, and other IPC mechanisms could in principle have been used; since Android chose Binder, it clearly has its advantages.
The Binder machinery is complex. In the Android source it is wrapped layer upon layer, blending communication and business logic together.
Simplifying and abstracting the whole, the relationship is: the business layer (BpServiceManager) packs requests and hands them to the communication layer (BpBinder → IPCThreadState → ProcessState → /dev/binder).
References
《深入理解Android:卷1》
Android 12 系统源码分析 | Native Binder 代码变迁 - 知乎 (zhihu.com)
Original article: https://blog.csdn.net/Shujie_L/article/details/137200393