[Android 13] Binder Series -- Getting a Service

Getting a Service
hongxi.zhu 2023-7-8

1. The Client Initiates the Service Request

Take the SurfaceFlinger process obtaining the InputFlinger service as an example:
frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp

    sp<IBinder> input(defaultServiceManager()->getService(String16("inputflinger")));
    static_cast<void>(mScheduler->schedule([=] {
        if (input == nullptr) {
            ALOGE("Failed to link to input service");
        } else {
            mInputFlinger = interface_cast<os::IInputFlinger>(input);
        }
    }));

2. Getting the ServiceManager Proxy

How the proxy object for ServiceManager (SM) is obtained was analyzed in an earlier article; see the previous post: Binder Series -- Getting ServiceManager.
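As a quick recap, a sketch abridged from IServiceManager.cpp (reconstructed from memory, so treat the details as approximate): defaultServiceManager() lazily wraps the BpBinder for handle 0 in the AIDL proxy and caches it.

sp<IServiceManager> defaultServiceManager() {
    std::call_once(gSmOnce, []() {
        sp<AidlServiceManager> sm = nullptr;
        while (sm == nullptr) {
            // handle 0 is the context manager, i.e. servicemanager
            sm = interface_cast<AidlServiceManager>(
                    ProcessState::self()->getContextObject(nullptr));
            if (sm == nullptr) sleep(1);  // servicemanager may not be up yet
        }
        gDefaultServiceManager = sp<ServiceManagerShim>::make(sm);
    });
    return gDefaultServiceManager;
}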

3. Requesting the Service from ServiceManager

BpServiceManager::getService

::android::binder::Status BpServiceManager::getService(const ::std::string& name, ::android::sp<::android::IBinder>* _aidl_return) {
  ::android::Parcel _aidl_data;
  _aidl_data.markForBinder(remoteStrong());
  ::android::Parcel _aidl_reply;
  ::android::status_t _aidl_ret_status = ::android::OK;
  ::android::binder::Status _aidl_status;
  _aidl_ret_status = _aidl_data.writeInterfaceToken(getInterfaceDescriptor());  // interface descriptor of the service
  _aidl_ret_status = _aidl_data.writeUtf8AsUtf16(name);  // service name
  _aidl_ret_status = remote()->transact(BnServiceManager::TRANSACTION_getService, _aidl_data, &_aidl_reply, 0);
  // reply
  _aidl_ret_status = _aidl_status.readFromParcel(_aidl_reply);
  // return the service BpBinder
  _aidl_ret_status = _aidl_reply.readNullableStrongBinder(_aidl_return);
  ...
  return _aidl_status;
}

As shown earlier, remote() is the BpBinder object for SM. With SM's BpBinder in hand, calling its getService method issues a cross-process request that asks the ServiceManager process for our target service.
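For reference, interface_cast<> is just a thin template from IInterface.h; for a remote binder, INTERFACE::asInterface() news up the generated Bp class (here BpServiceManager) around the BpBinder:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);  // remote binder -> new Bp<INTERFACE>(obj)
}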

BpBinder::transact

// NOLINTNEXTLINE(google-default-arguments)
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        bool privateVendor = flags & FLAG_PRIVATE_VENDOR;
        // don't send userspace flags to the kernel
        flags = flags & ~FLAG_PRIVATE_VENDOR;
        ...
        status_t status;
        if (CC_UNLIKELY(isRpcBinder())) {  // RPC binder (socket-based)
            ...
        } else {  // traditional binder
            // hand off to IPCThreadState::transact, which talks to the driver; binderHandle() = 0 here
            status = IPCThreadState::self()->transact(binderHandle(), code, data, reply, flags);
        }
        ...
        return status;
    }
    return DEAD_OBJECT;
}

IPCThreadState::transact is then called to exchange data with the driver.

IPCThreadState::transact

status_t IPCThreadState::transact(int32_t handle, uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    LOG_ALWAYS_FATAL_IF(data.isForRpc(), "Parcel constructed for RPC, but being used with binder.");
    status_t err;
    flags |= TF_ACCEPT_FDS;
    // write the data into the mOut buffer for the driver to pick up; cmd = BC_TRANSACTION
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);
    if ((flags & TF_ONE_WAY) == 0) {  // synchronous request
        ...
        if (reply) {
            err = waitForResponse(reply);  // write to the driver and wait for its response
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(nullptr, nullptr);  // oneway: return without waiting for the driver's response
    }
    return err;
}
  1. writeTransactionData builds a binder_transaction_data struct and writes it into the mOut buffer (its layout is sketched below)
  2. waitForResponse reads/writes against the driver and blocks until the driver returns
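For reference, this is the struct being serialized into mOut. The sketch below is abridged from the binder UAPI header (include/uapi/linux/android/binder.h); plain fixed-width typedefs stand in for the kernel's __u32/__u64 types so the sketch compiles on its own:

#include <cstdint>

using binder_size_t    = uint64_t;  // __u64 in the UAPI header
using binder_uintptr_t = uint64_t;  // __u64 in the UAPI header

struct binder_transaction_data {
    union {
        uint32_t         handle;  // target of a command transaction (0 = servicemanager)
        binder_uintptr_t ptr;     // target of a return transaction
    } target;
    binder_uintptr_t cookie;      // target object cookie
    uint32_t         code;        // e.g. TRANSACTION_getService
    uint32_t         flags;       // e.g. TF_ONE_WAY, TF_ACCEPT_FDS
    int32_t          sender_pid;  // filled in by the driver on delivery
    uint32_t         sender_euid;
    binder_size_t    data_size;     // bytes of payload (Parcel::ipcDataSize())
    binder_size_t    offsets_size;  // bytes of object offsets
    union {
        struct {
            binder_uintptr_t buffer;   // payload address (Parcel::ipcData())
            binder_uintptr_t offsets;  // offsets of flat_binder_objects inside buffer
        } ptr;
        uint8_t buf[8];
    } data;
};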

IPCThreadState::writeTransactionData

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();    // actual size of the data crossing the process boundary
        tr.data.ptr.buffer = data.ipcData();  // address of that data
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);  // total extent of the objects contained in data
        tr.data.ptr.offsets = data.ipcObjects();  // offsets of those objects
    } else if (statusBuffer) {  // statusBuffer = nullptr
        //
    } else {
        return (mLastError = err);
    }
    mOut.writeInt32(cmd);         // cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr));  // write the tr struct (the data format shared by the driver and user processes)
    return NO_ERROR;
}

IPCThreadState::waitForResponse

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;  // read/write against the driver
        // back from the driver: consume what it wrote to this process
        cmd = (uint32_t)mIn.readInt32();  // fetch the protocol command code
        switch (cmd) {
        ...
        case BR_REPLY:  // handle the driver's BR_REPLY to this process
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(nullptr,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t));
                    }
                } else {
                    freeBuffer(nullptr,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t));
                    continue;
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    ...
    return err;
}

IPCThreadState::talkWithDriver()

status_t IPCThreadState::talkWithDriver(bool doReceive)  // doReceive defaults to true
{
    binder_write_read bwr;
    // Is the read buffer empty?
    // i.e. has this process finished consuming the data the driver sent last time?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    // Only write to the driver once the previous incoming data has been
    // fully consumed; otherwise outAvail = 0
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    // If we need to read:  bwr.read_size = mIn.dataCapacity(), bwr.write_size = 0
    // If we need to write: bwr.read_size = 0, bwr.write_size = mOut.dataSize()
    // If both are 0, there is nothing to do and we return immediately
    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    bwr.write_consumed = 0;  // bytes of the write payload the driver has consumed
    bwr.read_consumed = 0;   // bytes of the read payload the driver has produced
    status_t err;
    do {
        ...
        // read/write against the driver via ioctl, with ioctl cmd = BINDER_WRITE_READ
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        ...
    } while (err == -EINTR);
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                ...
            else {  // the driver consumed everything we wrote, so the output buffer can be reset
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            // data arrived from the driver; mark bwr.read_consumed bytes for this process to handle
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);  // start reading from the beginning
        }
        return NO_ERROR;
    }
    return err;
}

The above mainly does two things:

  1. Build the binder_write_read struct bwr, the data format a user process and the driver use to talk to each other (its layout is sketched below)
  2. Hand bwr to the driver via ioctl
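In place of the original figure, here is the binder_write_read layout itself, abridged from the same UAPI header (again with stand-in typedefs so the sketch is self-contained):

#include <cstdint>

using binder_size_t    = uint64_t;  // __u64 in the UAPI header
using binder_uintptr_t = uint64_t;

struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer (mOut.dataSize())
    binder_size_t    write_consumed;  // bytes consumed by the driver (written back)
    binder_uintptr_t write_buffer;    // user-space address of mOut.data()
    binder_size_t    read_size;       // bytes available in read_buffer (mIn.dataCapacity())
    binder_size_t    read_consumed;   // bytes produced by the driver (written back)
    binder_uintptr_t read_buffer;     // user-space address of mIn.data()
};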

4. The Driver Side

When user space issues the ioctl system call, control lands in the module-specific implementation, i.e. binder_ioctl in the binder driver. At this point we are still in the context of the requesting user process.

BC_TRANSACTION

binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);  /* size of the ioctl payload */
    void __user *ubuf = (void __user *)arg;
    thread = binder_get_thread(proc);  /* get the calling thread of the requesting process */
    switch (cmd) {
    case BINDER_WRITE_READ:  /* reading and writing is the main thing user processes do */
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
    ...
    }
    ret = 0;
err:
    ...
}

binder_ioctl_write_read

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;
    /* copy the bwr struct in from user space */
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) {  /* there is data to write */
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
    }
    if (bwr.read_size > 0) {  /* there is data to read */
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        binder_inner_proc_lock(proc);
        if (!binder_worklist_empty_ilocked(&proc->todo))
            binder_wakeup_proc_ilocked(proc);
        binder_inner_proc_unlock(proc);
    }
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}

The user process's data is copied into kernel space, and binder_thread_write is called to process the write payload.

binder_thread_write

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;
        /* copy the cmd code in from the user-space buffer;
         * ptr points into that user-space memory */
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        /* cmd has been consumed, so advance the data pointer by one cmd
         * so the payload behind it can be read next */
        ptr += sizeof(uint32_t);
        switch (cmd) {
        ...
        case BC_TRANSACTION:  /* this time: BC_TRANSACTION */
        case BC_REPLY: {
            struct binder_transaction_data tr;
            /* copy the binder_transaction_data struct into kernel space;
             * the payload layout is cmd + tr, and ptr was advanced above,
             * so a straight copy works here */
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);  /* advance ptr past the tr struct */
            binder_transaction(proc, thread, &tr,
                       cmd == BC_REPLY, 0);
            break;
        }
        ...
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

The binder_transaction_data is copied from user space into the kernel, and binder_transaction is then called to process it.

binder_transaction

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    int ret;
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end, *off_start;
    binder_size_t off_min;
    u8 *sg_bufp, *sg_buf_end;
    struct binder_proc *target_proc = NULL;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_buffer_object *last_fixup_obj = NULL;
    binder_size_t last_fixup_min_off = 0;
    struct binder_context *context = proc->context;
    int t_debug_id = atomic_inc_return(&binder_last_id);
    char *secctx = NULL;
    u32 secctx_sz = 0;
    if (reply) {  /* BC_REPLY */
        ...
    } else {  /* BC_TRANSACTION */
        if (tr->target.handle) {
            ...
        } else {  /* tr->target.handle = 0 */
            mutex_lock(&context->context_mgr_node_lock);
            target_node = context->binder_context_mgr_node;  /* the SM service's binder_node */
            if (target_node)
                target_node = binder_get_node_refs_for_txn(
                        target_node, &target_proc,
                        &return_error);
            else
                return_error = BR_DEAD_REPLY;
            mutex_unlock(&context->context_mgr_node_lock);
        }
        e->to_node = target_node->debug_id;
        binder_inner_proc_lock(proc);
        if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
            ...
        }
        binder_inner_proc_unlock(proc);
    }
    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL);  /* binder_transaction */
    spin_lock_init(&t->lock);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);  /* binder_work */
    /* the binder_transaction goes to the target process; the binder_work goes to
     * the current thread. A binder_transaction embeds a binder_work, a binder_buffer, etc. */
    t->debug_id = t_debug_id;
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;  /* for a synchronous request, remember the client's requesting thread */
    else
        t->from = NULL;    /* oneway: no need to record it */
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;      /* the SM process */
    t->to_thread = target_thread;  /* no target thread was specified, so NULL */
    t->code = tr->code;   /* getService */
    t->flags = tr->flags;
    ...
    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY));  /* look up a suitable buffer in the buffer rbtree; allocate and insert one if none fits */
    ...
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    off_start = (binder_size_t *)(t->buffer->data +
                      ALIGN(tr->data_size, sizeof(void *)));
    offp = off_start;
    /* copy tr's data.ptr.buffer (ipcData) into the binder_buffer of the
     * kernel-side binder_transaction */
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
    }
    /* copy tr's data.ptr.offsets (ipcObjects), the object offsets, into kernel space */
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
               tr->data.ptr.offsets, tr->offsets_size)) {
    }
    off_end = (void *)off_start + tr->offsets_size;
    sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));
    sg_buf_end = sg_bufp + extra_buffers_size - ALIGN(secctx_sz, sizeof(u64));
    off_min = 0;
    for (; offp < off_end; offp++) {
        /* data carries no binder objects here, so tr->offsets_size = 0 and
         * off_end = off_start = offp; this loop body is never entered */
    }
    /* work for the current thread: tells the client that this one-way leg
     * (client -> driver) completed successfully */
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    /* work for the target process: the driver's one-way leg towards the target */
    t->work.type = BINDER_WORK_TRANSACTION;
    if (reply) {
        ...
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        binder_inner_proc_lock(proc);
        /* enqueue tcomplete on the current thread's todo queue via
         * list_add_tail(&tcomplete->entry, &thread->todo) */
        binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
        t->need_reply = 1;  /* the target process must reply */
        t->from_parent = thread->transaction_stack;  /* the current thread's transaction stack is NULL, so t->from_parent = NULL */
        thread->transaction_stack = t;  /* push t onto the current thread's transaction stack */
        binder_inner_proc_unlock(proc);
        if (!binder_proc_transaction(t, target_proc, target_thread)) {
            ...
        }
    } else {
        ...
    }
    if (target_thread)
        binder_thread_dec_tmpref(target_thread);
    binder_proc_dec_tmpref(target_proc);
    if (target_node)
        binder_dec_node_tmpref(target_node);
    /*
     * write barrier to synchronize with initialization
     * of log entry
     */
    smp_wmb();
    WRITE_ONCE(e->debug_id_done, t_debug_id);
    return;
...
}

Since we are asking the SM service for a service, the request carries no binder objects (the data holds only the service name), so the binder-object translation is skipped. The driver then builds the binder_transaction destined for the target process and the binder_work for the current thread, enqueues them on the respective todo queues, and finally wakes up the target thread (or process).
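To make the queueing concrete, here is a heavily abridged sketch of the two kernel structures involved (field subset only, with a stand-in list_head declaration so it compiles outside the kernel tree):

struct list_head { struct list_head *next, *prev; };  /* stand-in for <linux/list.h> */
struct binder_thread;
struct binder_proc;
struct binder_buffer;

struct binder_work {
    struct list_head entry;  /* links this work into a thread's or process's todo list */
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        /* ... */
    } type;
};

struct binder_transaction {
    struct binder_work    work;      /* embedded; binder_thread_read() recovers the transaction
                                      * via container_of(w, struct binder_transaction, work) */
    struct binder_thread *from;      /* requesting thread (synchronous calls only) */
    struct binder_proc   *to_proc;   /* target process (here: servicemanager) */
    struct binder_thread *to_thread; /* target thread, if one was selected */
    struct binder_buffer *buffer;    /* kernel buffer holding the copied Parcel data */
    unsigned              need_reply : 1;
    /* ... */
};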

binder_proc_transaction

static bool binder_proc_transaction(struct binder_transaction *t,
                    struct binder_proc *proc,
                    struct binder_thread *thread)
{
    struct binder_node *node = t->buffer->target_node;
    struct binder_priority node_prio;
    bool oneway = !!(t->flags & TF_ONE_WAY);
    bool pending_async = false;
    ...
    if (oneway) {
        ...
    }
    binder_inner_proc_lock(proc);
    if (!thread && !pending_async)
        /* pick an available thread from the target process */
        thread = binder_select_thread_ilocked(proc);
    if (thread) {
        binder_transaction_priority(thread->task, t, node_prio,
                        node->inherit_rt);
        /* an available binder thread was found in the target process:
         * put the binder_work on that thread's todo queue */
        binder_enqueue_thread_work_ilocked(thread, &t->work);
    } else if (!pending_async) {
        /* no thread available: put the binder_work on the process's todo queue */
        binder_enqueue_work_ilocked(&t->work, &proc->todo);
    } else {
        /* no thread available and the work is async:
         * put the binder_work on the node's async_todo queue */
        binder_enqueue_work_ilocked(&t->work, &node->async_todo);
    }
    if (!pending_async)  /* not async: wake up the target thread/process */
        binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);
    binder_inner_proc_unlock(proc);
    binder_node_unlock(node);
    return true;
}

Here the driver tries to find an available thread in the target process to handle the binder_transaction. If one is found, t->work is queued on that thread's todo queue; otherwise it goes on the process's todo queue. The target thread (or process) is then woken up.

binder_wakeup_thread_ilocked

static void binder_wakeup_thread_ilocked(struct binder_proc *proc,
                     struct binder_thread *thread,
                     bool sync)
{
    assert_spin_locked(&proc->inner_lock);
    if (thread) {
        /* a specific thread in the target process is available:
         * wake it from its wait queue */
        if (sync)
            wake_up_interruptible_sync(&thread->wait);
        else
            wake_up_interruptible(&thread->wait);
        return;
    }
    /* no usable thread found: wake all of the target process's binder threads */
    binder_wakeup_poll_threads_ilocked(proc, sync);
}

This is the crucial spot where the target process (thread) is woken up; from here execution crosses into the target process's context (after starting up, the SM process blocks inside the driver waiting to be woken).

Entering the target process

Where the SM process wakes up

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    int ret = 0;
    int wait_for_proc_work;
    if (*consumed == 0) {  /* if read_consumed == 0, return a BR_NOOP event to user space first */
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }
retry:
    binder_inner_proc_lock(proc);
    /* check whether the current thread is idle (todo queue empty & transaction stack empty) */
    wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
    binder_inner_proc_unlock(proc);
    thread->looper |= BINDER_LOOPER_STATE_WAITING;  /* the thread is idle, so mark it as waiting */
    if (non_block) {
        /* the user process opened the driver in non-blocking mode (O_NONBLOCK) */
        if (!binder_has_work(thread, wait_for_proc_work))
            ret = -EAGAIN;
    } else {
        /* blocking mode */
        ret = binder_wait_for_work(thread, wait_for_proc_work);
    }
    /* what the thread (process) does once it is woken up */
    while (1) {
        ...
    }
done:
    ...
}

binder_wait_for_work

static int binder_wait_for_work(struct binder_thread *thread,
                bool do_proc_work)
{
    DEFINE_WAIT(wait);
    struct binder_proc *proc = thread->proc;
    int ret = 0;
    freezer_do_not_count();
    binder_inner_proc_lock(proc);
    for (;;) {
        /* put the thread on the wait queue and update its state */
        prepare_to_wait(&thread->wait, &wait, TASK_INTERRUPTIBLE);
        /* condition for leaving the sleep: even when woken, the thread only
         * exits the loop if there is work, otherwise it sleeps again */
        if (binder_has_work_ilocked(thread, do_proc_work))
            break;  /* the work queue has work: leave the loop and process the todo queue */
        /* the thread is idle: add it to the process's waiting_threads */
        if (do_proc_work)
            list_add(&thread->waiting_thread_node,
                 &proc->waiting_threads);
        binder_inner_proc_unlock(proc);
        schedule();  /* yield the CPU and sleep; the thread now waits to be woken */
        binder_inner_proc_lock(proc);
        list_del_init(&thread->waiting_thread_node);
        if (signal_pending(current)) {
            ret = -ERESTARTSYS;
            break;
        }
    }
    /* remove the thread from the wait queue and put it back into the TASK_RUNNING state */
    finish_wait(&thread->wait, &wait);
    binder_inner_proc_unlock(proc);
    freezer_count();
    return ret;
}

Continuing after the wake-up above:

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    ...
    if (non_block) {
        if (!binder_has_work(thread, wait_for_proc_work))
            ret = -EAGAIN;
    } else {
        ret = binder_wait_for_work(thread, wait_for_proc_work);
    }
    /* woken from the wait above: clear the thread's waiting state */
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
    /* the work to do after waking up */
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data_secctx tr;
        struct binder_transaction_data *trd = &tr.transaction_data;
        struct binder_work *w = NULL;
        struct list_head *list = NULL;
        struct binder_transaction *t = NULL;
        struct binder_thread *t_from;
        size_t trsize = sizeof(*trd);
        binder_inner_proc_lock(proc);
        /* if the current thread's todo queue is non-empty the work list is the
         * thread's todo queue, otherwise the process's */
        if (!binder_worklist_empty_ilocked(&thread->todo))
            list = &thread->todo;
        else if (!binder_worklist_empty_ilocked(&proc->todo) &&
               wait_for_proc_work)
            list = &proc->todo;
        else {
            binder_inner_proc_unlock(proc);
            ...
        }
        /* take the binder_work off the work list (dequeue) */
        w = binder_dequeue_work_head_ilocked(list);
        if (binder_worklist_empty_ilocked(&thread->todo))
            thread->process_todo = false;
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            binder_inner_proc_unlock(proc);
            /* recover the binder_transaction from the binder_work */
            t = container_of(w, struct binder_transaction, work);
        } break;
        ...
        }
        BUG_ON(t->buffer == NULL);
        if (t->buffer->target_node) {
            /* read the parameters out of the binder_buffer and build the binder_transaction_data */
            struct binder_node *target_node = t->buffer->target_node;  /* the SM process's node */
            struct binder_priority node_prio;
            trd->target.ptr = target_node->ptr;  /* data pointer */
            trd->cookie = target_node->cookie;   /* getService carried no binder, so the cookie is null here */
            node_prio.sched_policy = target_node->sched_policy;
            node_prio.prio = target_node->min_priority;
            binder_transaction_priority(current, t, node_prio,
                            target_node->inherit_rt);
            cmd = BR_TRANSACTION;  /* cmd = BR_TRANSACTION: the driver's request towards the target process */
        } else {
            ...
        }
        trd->code = t->code;    /* the protocol code corresponding to getService */
        trd->flags = t->flags;  /* flags */
        trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
        t_from = binder_get_txn_from(t);  /* which client thread the transaction came from (the requesting thread) */
        if (t_from) {
            struct task_struct *sender = t_from->proc->tsk;
            trd->sender_pid =
                task_tgid_nr_ns(sender,
                        task_active_pid_ns(current));  /* the client's requesting process */
        } else {
            trd->sender_pid = 0;
        }
        /* for getService, data holds only the name of the wanted service; no binder objects */
        trd->data_size = t->buffer->data_size;
        trd->offsets_size = t->buffer->offsets_size;
        trd->data.ptr.buffer = (binder_uintptr_t)
            ((uintptr_t)t->buffer->data +
            binder_alloc_get_user_buffer_offset(&proc->alloc));
        trd->data.ptr.offsets = trd->data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));
        /* copy cmd into the user-space input buffer */
        if (put_user(cmd, (uint32_t __user *)ptr)) {
            ...
        }
        ptr += sizeof(uint32_t);  /* advance the offset */
        /* copy the tr struct into the user-space input buffer */
        if (copy_to_user(ptr, &tr, trsize)) {
            ...
        }
        ptr += trsize;  /* advance the offset */
        /* the user-space input buffer now holds a complete payload: cmd + tr */
        if (t_from)
            binder_thread_dec_tmpref(t_from);
        t->buffer->allow_user_free = 1;
        if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) {
            /* update the transaction chain and the current thread's transaction stack */
            binder_inner_proc_lock(thread->proc);
            t->to_parent = thread->transaction_stack;  /* NULL, this thread had no transaction yet */
            t->to_thread = thread;  /* update the transaction's target thread */
            thread->transaction_stack = t;  /* push t onto the current thread's transaction stack */
            binder_inner_proc_unlock(thread->proc);
        } else {
            binder_free_transaction(t);
        }
        break;
    }
done:
    *consumed = ptr - buffer;  /* update the number of bytes written back */
    binder_inner_proc_lock(proc);
    /* check whether to ask the user process to spawn another binder thread.
     * The ServiceManager process configures a single binder thread (its main
     * thread) at startup: ps->setThreadPoolMaxThreadCount(0) */
    if (proc->requested_threads == 0 &&
        list_empty(&thread->proc->waiting_threads) &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
         /*spawn a new thread if we leave this out */) {
        proc->requested_threads++;
        binder_inner_proc_unlock(proc);
        if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
            return -EFAULT;
        binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
    } else
        binder_inner_proc_unlock(proc);
    return 0;
}

Once the thread (process) wakes, the driver unpacks the client's binder_transaction and writes it into the SM process's user-space input buffer, effectively issuing a BR_TRANSACTION request to the SM user process. The user process returns from ioctl and processes the data in its input buffer.

The ServiceManager process, user space

The ServiceManager process has only one binder thread. After it starts, it uses a Looper to listen on the driver fd; when the driver returns from ioctl (i.e. the fd becomes readable), the handleEvent() callback below is invoked.
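For context, this is roughly how that fd gets registered with the Looper (abridged from the BinderCallback class in the same main.cpp; error handling trimmed, so treat it as a sketch):

class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = sp<BinderCallback>::make();

        int binderFd = -1;
        IPCThreadState::self()->setupPolling(&binderFd);  // opens the binder driver fd for polling
        LOG_ALWAYS_FATAL_IF(binderFd < 0, "Failed to setupPolling: %d", binderFd);

        int ret = looper->addFd(binderFd,
                                Looper::POLL_CALLBACK,
                                Looper::EVENT_INPUT,  // fire when the driver has data for us
                                cb,
                                nullptr /*data*/);
        LOG_ALWAYS_FATAL_IF(ret != 1, "Failed to add binder FD to Looper");

        return cb;
    }
    // handleEvent(), shown below, is the other half of this class.
};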

handleEvent

frameworks/native/cmds/servicemanager/main.cpp

    int handleEvent(int /* fd */, int /* events */, void* /* data */) override {
        IPCThreadState::self()->handlePolledCommands();
        return 1;  // Continue receiving callbacks.
    }

IPCThreadState::handlePolledCommands

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::handlePolledCommands()
{
    status_t result;
    do {
        result = getAndExecuteCommand();  // keep fetching the commands the driver wrote into the input buffer
    } while (mIn.dataPosition() < mIn.dataSize());
    processPendingDerefs();
    flushCommands();  // send the thread back into kernel space
    return result;
}

getAndExecuteCommand

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        // one more thread is now busy
        mProcess->mExecutingThreadsCount++;
        // if the number of executing threads has reached the process's binder
        // thread limit, the thread pool is exhausted: start timing the starvation
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
        result = executeCommand(cmd);  // perform the behavior for this cmd
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        // this thread finished its work: one fewer busy thread
        mProcess->mExecutingThreadsCount--;
        // if busy threads dropped below the limit while the starvation timer
        // is still running, log how long the pool was starved and reset it
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        // Cond broadcast can be expensive, so don't send it every time a binder
        // call is processed. b/168806193
        if (mProcess->mWaitingForThreads > 0) {
            pthread_cond_broadcast(&mProcess->mThreadCountDecrement);  // wake blocked threads if any are waiting
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }
    return result;
}

The key part is executeCommand, which dispatches on the BR_XX code from the driver.

executeCommand

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
    ...
    case BR_TRANSACTION:
        {
            binder_transaction_data_secctx tr_secctx;
            binder_transaction_data& tr = tr_secctx.transaction_data;
            if (cmd == (int) BR_TRANSACTION_SEC_CTX) {
                result = mIn.read(&tr_secctx, sizeof(tr_secctx));
            } else {  // cmd = BR_TRANSACTION
                result = mIn.read(&tr, sizeof(tr));
                tr_secctx.secctx = 0;
            }
            Parcel buffer;
            const void* origServingStackPointer = mServingStackPointer;
            mServingStackPointer = __builtin_frame_address(0);
            const pid_t origPid = mCallingPid;
            const char* origSid = mCallingSid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
            const int32_t origWorkSource = mWorkSource;
            const bool origPropagateWorkSet = mPropagateWorkSource;
            // Calling work source will be set by Parcel#enforceInterface. Parcel#enforceInterface
            // is only guaranteed to be called for AIDL-generated stubs so we reset the work source
            // here to never propagate it.
            clearCallingWorkSource();
            clearPropagateWorkSource();
            mCallingPid = tr.sender_pid;
            mCallingSid = reinterpret_cast<const char*>(tr_secctx.secctx);
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;
            Parcel reply;
            status_t error;
            if (tr.target.ptr) {  // ptr non-null: a binder reference and cookie came along, so convert
                                  // them to the local binder entity before calling its method
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }
            } else {  // a getService request to SM carries no binder object, so this branch is taken,
                      // using SM's own BBinder (registered with the driver when SM started)
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
            // back from transact: send BC_REPLY to the driver
            if ((tr.flags & TF_ONE_WAY) == 0) {  // synchronous request
                constexpr uint32_t kForwardReplyFlags = TF_CLEAR_BUF;  // ask the driver to clear the buffer
                sendReply(reply, (tr.flags & kForwardReplyFlags));  // reply to the driver
            } else {  // asynchronous request: no reply needed
                ...
            }
            ...
        }
        break;
    ...
    return result;
}

BBinder::transact

frameworks/native/libs/binder/Binder.cpp

// NOLINTNEXTLINE(google-default-arguments)
status_t BBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    if (reply != nullptr && (flags & FLAG_CLEAR_BUF)) {
        reply->markSensitive();
    }
    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            err = pingBinder();
            break;
        case EXTENSION_TRANSACTION:
            CHECK(reply != nullptr);
            err = reply->writeStrongBinder(getExtension());
            break;
        case DEBUG_PID_TRANSACTION:
            CHECK(reply != nullptr);
            err = reply->writeInt32(getDebugPid());
            break;
        case SET_RPC_CLIENT_TRANSACTION: {
            err = setRpcClientDebug(data);
            break;
        }
        default:
            // code is getService, none of the above, so call onTransact(),
            // which is implemented by the subclass
            err = onTransact(code, data, reply, flags);
            break;
    }
    // In case this is being transacted on in the same process.
    if (reply != nullptr) {
        reply->setDataPosition(0);
        if (reply->dataSize() > LOG_REPLIES_OVER_SIZE) {
            ALOGW("Large reply transaction of %zu bytes, interface descriptor %s, code %d",
                  reply->dataSize(), String8(getInterfaceDescriptor()).c_str(), code);
        }
    }
    return err;
}

Eventually the call has to be handled by the subclass's onTransact() implementation; the subclass here is BnServiceManager.

BnServiceManager::onTransact

::android::status_t BnServiceManager::onTransact(uint32_t _aidl_code, const ::android::Parcel& _aidl_data, ::android::Parcel* _aidl_reply, uint32_t _aidl_flags) {
  ::android::status_t _aidl_ret_status = ::android::OK;
  switch (_aidl_code) {
  case BnServiceManager::TRANSACTION_getService:
  {
    ::std::string in_name;
    ::android::sp<::android::IBinder> _aidl_return;
    // read the service name passed across the process boundary
    _aidl_ret_status = _aidl_data.readUtf8FromUtf16(&in_name);
    // call getService() on the class implementing BnServiceManager
    ::android::binder::Status _aidl_status(getService(in_name, &_aidl_return));
    // write the obtained service binder into _aidl_reply
    _aidl_ret_status = _aidl_reply->writeStrongBinder(_aidl_return);
  }
  break;
  ...
}

ServiceManager::getService

Status ServiceManager::getService(const std::string& name, sp<IBinder>* outBinder) {
    *outBinder = tryGetService(name, true);
    // returns ok regardless of result for legacy reasons
    return Status::ok();
}

sp<IBinder> ServiceManager::tryGetService(const std::string& name, bool startIfNotFound) {
    auto ctx = mAccess->getCallingContext();  // validate the caller's access
    sp<IBinder> out;
    Service* service = nullptr;
    if (auto it = mNameToService.find(name); it != mNameToService.end()) {
        service = &(it->second);
        out = service->binder;
    }
    if (out) {
        // Setting this guarantee each time we hand out a binder ensures that the client-checking
        // loop knows about the event even if the client immediately drops the service
        service->guaranteeClient = true;
    }
    return out;
}

tryGetService looks up the BpBinder registered under name in mNameToService and returns it up into IPCThreadState::executeCommand.
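mNameToService is simply an in-memory map inside the servicemanager process. A sketch of the relevant pieces, abridged from ServiceManager.h (only the fields used here; the rest of the Service struct is omitted):

struct Service {
    sp<IBinder> binder;            // the registered service's binder (a BpBinder inside SM)
    bool guaranteeClient = false;  // see the comment in tryGetService above
    // ... allowIsolated, dumpPriority, debugPid, ...
};

using ServiceMap = std::map<std::string, Service>;
ServiceMap mNameToService;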

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
    ...
    case BR_TRANSACTION:
        {
            if (tr.target.ptr) {  // ptr non-null: a binder reference and cookie came along and must
                                  // be converted to the local binder entity
                ...
            } else {  // getService carries no binder object, so SM's own BBinder handles it
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
            // back from transact: send BC_REPLY to the driver
            if ((tr.flags & TF_ONE_WAY) == 0) {  // synchronous request
                constexpr uint32_t kForwardReplyFlags = TF_CLEAR_BUF;  // ask the driver to clear the buffer
                sendReply(reply, (tr.flags & kForwardReplyFlags));  // reply to the driver
            } else {  // asynchronous request: no reply needed
                ...
            }
            ...
        }
        break;
    ...
    return result;
}

Back from transact, sendReply is called to return the obtained BpBinder to the driver.

IPCThreadState::sendReply

status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    // write the data into the output buffer
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    // hand it to the driver
    return waitForResponse(nullptr, nullptr);
}
  1. writeTransactionData writes the IPC data into the mOut output buffer, in the usual payload format: cmd + tr
  2. waitForResponse then writes it out to the driver

writeTransactionData

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;  // handle = -1
    tr.code = code;  // code = 0
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();    // size of the IPC data
        tr.data.ptr.buffer = data.ipcData();  // the IPC data itself
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);  // extent of the binder object offsets
        tr.data.ptr.offsets = data.ipcObjects();  // pointer to the binder object offsets
    } else if (statusBuffer) {
        ...
    } else {
        return (mLastError = err);
    }
    mOut.writeInt32(cmd);         // cmd = BC_REPLY
    mOut.write(&tr, sizeof(tr));  // the tr struct
    return NO_ERROR;
}

IPCThreadState::waitForResponse

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;  // read/write against the driver
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        ...
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {  // reply = null
                    ...
                } else {
                    freeBuffer(nullptr,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t));
                    continue;
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    ...
    return err;
}

talkWithDriver then passes the data to the driver via ioctl.

Back into kernel space

Processing enters the driver again, still in the SM process's context.

binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    thread = binder_get_thread(proc);  /* get the current thread */
    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
    ...
    return ret;
}

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) {
        ...
    }
    ...
}

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;
        if (get_user(cmd, (uint32_t __user *)ptr))  /* copy cmd in from the user-space output buffer */
            return -EFAULT;
        ptr += sizeof(uint32_t);  /* advance the pointer by one cmd */
        switch (cmd) {
        ...
        case BC_TRANSACTION:
        case BC_REPLY: {  /* this time it is BC_REPLY */
            struct binder_transaction_data tr;
            if (copy_from_user(&tr, ptr, sizeof(tr)))  /* copy the tr struct into kernel space */
                return -EFAULT;
            ptr += sizeof(tr);  /* advance the pointer by one tr struct */
            /* the core routine: process the tr payload and deliver it to the
             * requesting client process; cmd == BC_REPLY is true here */
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY, 0);
            break;
        }
        ...
        }
        *consumed = ptr - buffer;  /* update the consumed byte count; everything has been written by this point */
    }
    return 0;
}

The SM process's output buffer is copied into kernel space and the core routine binder_transaction takes over.

binder_transaction

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    ...
    if (reply) {  /* this time: BC_REPLY */
        binder_inner_proc_lock(proc);
        in_reply_to = thread->transaction_stack;  /* the transaction pushed earlier while handling BR_TRANSACTION */
        thread->transaction_stack = in_reply_to->to_parent;  /* NULL */
        binder_inner_proc_unlock(proc);
        target_thread = binder_get_txn_from_and_acq_inner(in_reply_to);  /* the requesting thread on the client side */
        target_proc = target_thread->proc;  /* the requesting process's binder_proc */
        atomic_inc(&target_proc->tmp_ref);
        binder_inner_proc_unlock(target_thread->proc);
    } else {
        ...
    }
    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL);  /* the binder_transaction destined for the requesting process */
    binder_stats_created(BINDER_STAT_TRANSACTION);
    spin_lock_init(&t->lock);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);  /* the binder_work acknowledging completion to the current SM process */
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
    t->debug_id = t_debug_id;
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;  /* don't record the server's binder thread (the client never replies back to the server) */
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;      /* the client process */
    t->to_thread = target_thread;  /* the client's requesting thread */
    t->code = tr->code;            /* code = 0 */
    t->flags = tr->flags;
    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY));  /* get a free binder_buffer from the rbtree */
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;            /* attach t */
    t->buffer->target_node = target_node;  /* target_node is NULL in a reply */
    off_start = (binder_size_t *)(t->buffer->data +
                      ALIGN(tr->data_size, sizeof(void *)));
    offp = off_start;
    /* copy the IPC data out of the user-space tr struct into kernel space */
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
    }
    /* copy the object offset pointers out of the user-space tr struct into kernel space */
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
               tr->data.ptr.offsets, tr->offsets_size)) {
    }
    off_end = (void *)off_start + tr->offsets_size;
    sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));
    sg_buf_end = sg_bufp + extra_buffers_size - ALIGN(secctx_sz, sizeof(u64));
    off_min = 0;
    /* the reply carries a binder proxy object, so this loop runs one pass */
    for (; offp < off_end; offp++) {
        struct binder_object_header *hdr;
        size_t object_size = binder_validate_object(t->buffer, *offp);
        hdr = (struct binder_object_header *)(t->buffer->data + *offp);
        off_min = *offp + object_size;
        switch (hdr->type) {
        /* SM replied with a binder object local to SM, but it must cross into the
         * requesting process, so the binder type is converted to a handle type:
         *  - use the binder pointer to find the matching binder_node in the current
         *    process's proc->nodes.rb_node tree
         *  - take a binder_ref on that binder_node and increment its strong refcount
         *  - change fp->hdr.type to BINDER_TYPE_HANDLE
         *  - set fp->binder = 0 and fp->cookie = 0
         *  - fp->handle = rdata.desc, the handle value user space will use; it refers
         *    to the binder_ref and, like all handles, is only meaningful within that process */
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct flat_binder_object *fp;
            fp = to_flat_binder_object(hdr);
            ret = binder_translate_binder(fp, t, thread);
            if (ret < 0) {
                return_error = BR_FAILED_REPLY;
                return_error_param = ret;
                return_error_line = __LINE__;
                goto err_translate_failed;
            }
        } break;
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            struct flat_binder_object *fp;
            fp = to_flat_binder_object(hdr);
            ret = binder_translate_handle(fp, t, thread);
            if (ret < 0) {
                return_error = BR_FAILED_REPLY;
                return_error_param = ret;
                return_error_line = __LINE__;
                goto err_translate_failed;
            }
        } break;
        ...
        }
    }
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    t->work.type = BINDER_WORK_TRANSACTION;
    if (reply) {
        /* enqueue tcomplete on the current SM thread's todo queue */
        binder_enqueue_thread_work(thread, tcomplete);
        binder_inner_proc_lock(target_proc);
        /* pop the transaction off the target thread's stack, completing the chain */
        binder_pop_transaction_ilocked(target_thread, in_reply_to);
        /* put &t->work on the client requesting thread's todo queue */
        binder_enqueue_thread_work_ilocked(target_thread, &t->work);
        binder_inner_proc_unlock(target_proc);
        /* wake up the requesting thread */
        wake_up_interruptible_sync(&target_thread->wait);
        /* restore the current SM thread's priority */
        binder_restore_priority(current, in_reply_to->saved_priority);
        /* free the buffer held by the incoming transaction */
        binder_free_transaction(in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        ...
    } else {
        ...
    }
    if (target_thread)
        binder_thread_dec_tmpref(target_thread);
    binder_proc_dec_tmpref(target_proc);
    if (target_node)
        binder_dec_node_tmpref(target_node);
    /*
     * write barrier to synchronize with initialization
     * of log entry
     */
    smp_wmb();
    WRITE_ONCE(e->debug_id_done, t_debug_id);
    return;
...
}

The requesting thread in the client process is woken up; execution enters the client process's context.

Just like the SM wake-up analyzed above, the client's requesting thread also resumes inside binder_wait_for_work and continues from there:

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    ...
    int ret = 0;
    int wait_for_proc_work;
    if (non_block) {
        if (!binder_has_work(thread, wait_for_proc_work))
            ret = -EAGAIN;
    } else {
        ret = binder_wait_for_work(thread, wait_for_proc_work);
    }
    /* the thread continues here after the wake-up */
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;  /* clear the thread's waiting state */
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data_secctx tr;
        struct binder_transaction_data *trd = &tr.transaction_data;
        struct binder_work *w = NULL;
        struct list_head *list = NULL;
        struct binder_transaction *t = NULL;
        struct binder_thread *t_from;
        size_t trsize = sizeof(*trd);
        binder_inner_proc_lock(proc);
        /* pick the todo queue */
        if (!binder_worklist_empty_ilocked(&thread->todo))
            list = &thread->todo;
        else if (!binder_worklist_empty_ilocked(&proc->todo) &&
               wait_for_proc_work)
            list = &proc->todo;
        else {
            binder_inner_proc_unlock(proc);
            ...
        }
        /* dequeue the binder_work from the todo queue */
        w = binder_dequeue_work_head_ilocked(list);
        if (binder_worklist_empty_ilocked(&thread->todo))
            thread->process_todo = false;  /* queue drained: the thread has nothing left pending */
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            binder_inner_proc_unlock(proc);
            t = container_of(w, struct binder_transaction, work);  /* recover the binder_transaction from the binder_work */
        } break;
        ...
        }
        BUG_ON(t->buffer == NULL);
        if (t->buffer->target_node) {  /* target_node is NULL for a reply */
            ...
        } else {
            trd->target.ptr = 0;  /* the reply carries a handle, not a binder, so ptr and cookie are both 0 */
            trd->cookie = 0;
            cmd = BR_REPLY;  /* the driver's return to the user process: BR_REPLY */
        }
        trd->code = t->code;
        trd->flags = t->flags;
        trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
        t_from = binder_get_txn_from(t);  /* the server's handling thread: NULL, it was never recorded (not needed) */
        if (t_from) {
            ...
        } else {
            trd->sender_pid = 0;
        }
        /* pull the IPC data and object offsets out of t and build the binder_transaction_data */
        trd->data_size = t->buffer->data_size;
        trd->offsets_size = t->buffer->offsets_size;
        trd->data.ptr.buffer = (binder_uintptr_t)
            ((uintptr_t)t->buffer->data +
            binder_alloc_get_user_buffer_offset(&proc->alloc));
        trd->data.ptr.offsets = trd->data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));
        tr.secctx = t->security_ctx;  /* null */
        /* write cmd into the client process's user-space input buffer */
        if (put_user(cmd, (uint32_t __user *)ptr)) {
        }
        ptr += sizeof(uint32_t);  /* advance the pointer by one cmd */
        /* write tr into the client process's user-space input buffer */
        if (copy_to_user(ptr, &tr, trsize)) {
        }
        ptr += trsize;  /* advance the pointer by one tr */
        t->buffer->allow_user_free = 1;  /* allow the transaction buffer to be freed */
        if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) {
            ...
        } else {
            binder_free_transaction(t);  /* release the binder_buffer */
        }
        break;
    }
done:
    /* check whether user space should be asked to spawn another binder thread */
    *consumed = ptr - buffer;
    binder_inner_proc_lock(proc);
    if (proc->requested_threads == 0 &&
        list_empty(&thread->proc->waiting_threads) &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
         /*spawn a new thread if we leave this out */) {
        proc->requested_threads++;
        binder_inner_proc_unlock(proc);
        binder_debug(BINDER_DEBUG_THREADS,
                 "%d:%d BR_SPAWN_LOOPER\n",
                 proc->pid, thread->pid);
        if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
            return -EFAULT;
        binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
    } else
        binder_inner_proc_unlock(proc);
    return 0;
}

The driver's kernel-space work is now done and control returns to user space (the ioctl returns).

Back in the client process's user space

From here the call returns along ioctl -> talkWithDriver -> waitForResponse:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        ...
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        // hand the contents of tr over to reply
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer);
                    } else {
                        ...
                    }
                } else {
                    ...
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    ...
    return err;  // from here we return all the way back to where the call began
}

In the BR_REPLY case the tr payload is installed into the reply Parcel, and the call then returns all the way back up to BpServiceManager::getService.

ipcSetDataReference

void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
    const binder_size_t* objects, size_t objectsCount, release_func relFunc)
{
    // this code uses 'mOwner == nullptr' to understand whether it owns memory
    LOG_ALWAYS_FATAL_IF(relFunc == nullptr, "must provide cleanup function");
    freeData();
    // store everything in the Parcel's member variables
    mData = const_cast<uint8_t*>(data);
    mDataSize = mDataCapacity = dataSize;
    mObjects = const_cast<binder_size_t*>(objects);
    mObjectsSize = mObjectsCapacity = objectsCount;
    mOwner = relFunc;
    binder_size_t minOffset = 0;
    for (size_t i = 0; i < mObjectsSize; i++) {
        binder_size_t offset = mObjects[i];
        const flat_binder_object* flat
            = reinterpret_cast<const flat_binder_object*>(mData + offset);
        uint32_t type = flat->hdr.type;
        if (!(type == BINDER_TYPE_BINDER || type == BINDER_TYPE_HANDLE ||
              type == BINDER_TYPE_FD)) {
            android_errorWriteLog(0x534e4554, "135930648");
            android_errorWriteLog(0x534e4554, "203847542");
            ALOGE("%s: unsupported type object (%" PRIu32 ") at offset %" PRIu64 "\n",
                  __func__, type, (uint64_t)offset);
            // WARNING: callers of ipcSetDataReference need to make sure they
            // don't rely on mObjectsSize in their release_func.
            mObjectsSize = 0;
            break;
        }
        minOffset = offset + sizeof(flat_binder_object);
    }
    scanForFds();
}

Back in BpServiceManager::getService

BpServiceManager::getService

::android::binder::Status BpServiceManager::getService(const ::std::string& name, ::android::sp<::android::IBinder>* _aidl_return) {
  ::android::Parcel _aidl_data;
  _aidl_data.markForBinder(remoteStrong());
  ::android::Parcel _aidl_reply;
  ::android::status_t _aidl_ret_status = ::android::OK;
  ::android::binder::Status _aidl_status;
  _aidl_ret_status = _aidl_data.writeInterfaceToken(getInterfaceDescriptor());
  _aidl_ret_status = _aidl_data.writeUtf8AsUtf16(name);
  _aidl_ret_status = remote()->transact(BnServiceManager::TRANSACTION_getService, _aidl_data, &_aidl_reply, 0);  // the cross-process request analyzed above
  _aidl_ret_status = _aidl_status.readFromParcel(_aidl_reply);
  _aidl_ret_status = _aidl_reply.readNullableStrongBinder(_aidl_return);  // on success, read the data the SM process sent back out of _aidl_reply
  return _aidl_status;
}

Parcel::readNullableStrongBinder

frameworks/native/libs/binder/Parcel.cpp

status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
    return unflattenBinder(val);
}

status_t Parcel::unflattenBinder(sp<IBinder>* out) const
{
    ...
    const flat_binder_object* flat = readObject(false);  // read a flat_binder_object out of mData
    if (flat) {
        switch (flat->hdr.type) {
            case BINDER_TYPE_BINDER: {
                sp<IBinder> binder =
                        sp<IBinder>::fromExisting(reinterpret_cast<IBinder*>(flat->cookie));
                return finishUnflattenBinder(binder, out);
            }
            case BINDER_TYPE_HANDLE: {  // SM sent the client the handle for the service
                sp<IBinder> binder =
                        ProcessState::self()->getStrongProxyForHandle(flat->handle);  // turn the handle into a BpBinder
                return finishUnflattenBinder(binder, out);
            }
        }
    }
    return BAD_TYPE;
}

ProcessState::getStrongProxyForHandle

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);  // look up the BpBinder for this handle in the process's mHandleToObject table
    if (e != nullptr) {
        IBinder* b = e->binder;  // this process has no proxy for the service yet, so the lookup finds nothing and b is null
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {  // special case for ServiceManager itself
                ...
            }
            sp<BpBinder> b = BpBinder::PrivateAccessor::create(handle);  // build this process's binder proxy for the service from the handle
            e->binder = b.get();
            if (b) e->refs = b->getWeakRefs();  // hand the proxy's weak refs to the handle_entry
            result = b;
        } else {
            ...
        }
    }
    return result;
}

ProcessState::lookupHandleLocked

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    // look up the handle in this process's mHandleToObject table
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        // no entry yet: create a handle_entry
        handle_entry e;
        e.binder = nullptr;  // initialize handle_entry->binder = nullptr
        e.refs = nullptr;    // weak reference count of the object
        // insert it into mHandleToObject
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return nullptr;
    }
    return &mHandleToObject.editItemAt(handle);
}

This BpBinder object is then returned to wherever getService was called, i.e. the SurfaceFlinger call at the very beginning. The service lookup is now complete.
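Putting it all together, the client-side pattern is just the two calls we started from (a minimal sketch mirroring the SurfaceFlinger example; getInputFlinger is a hypothetical helper):

#include <binder/IServiceManager.h>
#include <binder/IInterface.h>
#include <android/os/IInputFlinger.h>  // AIDL-generated header (assumed available to the client)

using namespace android;

sp<os::IInputFlinger> getInputFlinger() {
    // blocks in waitForResponse() until servicemanager replies with the handle
    sp<IBinder> binder = defaultServiceManager()->getService(String16("inputflinger"));
    if (binder == nullptr) return nullptr;
    // wraps the freshly created BpBinder(handle) in the AIDL-generated proxy
    return interface_cast<os::IInputFlinger>(binder);
}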
