Under the proposed AI Act of the European Commission, law enforcement and police service fall within the high-risk area of artificial intelligence (AI). As such, a particular responsibility exists in the area of digital government and high-risk AI systems to ensure that ethical and social aspects of AI usage are addressed. The AI Act also imposes explainability requirements on AI, which could be met through the use of explainable AI (XAI). The literature has not yet addressed the characteristics of the high-risk area of law enforcement and police service with respect to compliance with these explainability requirements. We conducted 11 expert interviews and used the grounded theory method to develop a grounded model of the phenomenon of AI explainability requirements compliance in the context of law enforcement and police service. We discuss how the model and the results can be useful to authorities, governments, practitioners, and researchers alike.