Data envelopment analysis (DEA) is a well-known instrument for determining the efficiencies of decision-making units (DMUs). Data sets in general are growing ever larger, and handling the resulting issues is the subject of big data research. Similar trends are evident in DEA applications. In this context, Zhu et al. (2018) develop an algorithm, based mainly on decomposing the set of DMUs, to accelerate the search for strongly efficient DMUs. The strongly efficient DMUs found in this way are stored in a bucket to facilitate further analyses. However, in this talk, we show that their stopping rule does not guarantee that the bucket contains only strongly efficient DMUs; in the end, it may also include inefficient ones. Therefore, we extend their algorithm to avoid this flaw by adding a final merging of the subsets of efficient DMUs and an additional evaluation step. Finally, we study computationally the impact of different rules for initiating this final stage.
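The decompose-screen-merge idea can be illustrated with a short sketch. This is not the algorithm of Zhu et al. (2018); it is a hedged toy version, assuming an input-oriented CCR (constant-returns-to-scale) envelopment model solved with `scipy.optimize.linprog`, a simple equal-size split into blocks, and efficiency judged by the radial score alone (a full strong-efficiency test would also check slacks). The function names `ccr_efficiency` and `screen_then_merge` are our own labels. The final loop reflects the proposed extension: survivors of the per-block screening are merged and evaluated once more, since a DMU efficient within its block may still be dominated by DMUs from other blocks.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU o against the DMUs
    in (X, Y). X: (n, m) inputs, Y: (n, s) outputs. Returns theta*."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_io
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output constraints: sum_j lambda_j * y_rj >= y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

def screen_then_merge(X, Y, n_blocks=4, tol=1e-6):
    """Toy decompose-screen-merge scheme: screen each block for its
    efficient DMUs, merge the survivors, and re-evaluate the merged set
    (the additional final evaluation step)."""
    n = X.shape[0]
    blocks = np.array_split(np.arange(n), n_blocks)
    candidates = []
    for blk in blocks:
        for o_local, o in enumerate(blk):
            # A DMU inefficient within its own block is inefficient
            # overall and can be discarded immediately.
            if ccr_efficiency(X[blk], Y[blk], o_local) >= 1 - tol:
                candidates.append(o)
    candidates = np.array(candidates)
    # Final step: re-check every candidate against the union of all
    # survivors; block-level efficiency alone is not sufficient.
    return [o for i, o in enumerate(candidates)
            if ccr_efficiency(X[candidates], Y[candidates], i) >= 1 - tol]
```

Discarding block-inefficient DMUs is safe because, under constant returns to scale, removing dominated points does not change the efficient frontier, so evaluating the candidates against their own union yields the same efficiency classification as evaluating them against the full data set.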