From patchwork Wed May 8 06:42:21 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anton Mikanovich <amikan@ilbers.de>
X-Patchwork-Id: 3531
From: Anton Mikanovich <amikan@ilbers.de>
To: isar-users@googlegroups.com
Cc: Anton Mikanovich <amikan@ilbers.de>
Subject: [PATCH 2/4] meta: Update OE-core libs to 5.0 LTS
Date: Wed, 8 May 2024 09:42:21 +0300
Message-Id: <20240508064223.534237-3-amikan@ilbers.de>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240508064223.534237-1-amikan@ilbers.de>
References: <20240508064223.534237-1-amikan@ilbers.de>
List-ID: isar-users.googlegroups.com

Based on v5.0.0 commit
7ef767d84d56b25498e45db83bb8f9d9caebeaf9.

Signed-off-by: Anton Mikanovich <amikan@ilbers.de>
---
 meta/lib/buildstats.py      |  88 +++++++++++---
 meta/lib/oe/gpg_sign.py     |  27 +++--
 meta/lib/oe/patch.py        | 225 ++++++++++++++++++++++++------------
 meta/lib/oe/path.py         |   6 +-
 meta/lib/oe/reproducible.py |   2 +-
 meta/lib/oe/sstatesig.py    | 126 ++++++++++++--------
 meta/lib/oe/terminal.py     |   4 +
 7 files changed, 325 insertions(+), 153 deletions(-)

diff --git a/meta/lib/buildstats.py b/meta/lib/buildstats.py
index 8627ed3c..fe801a28 100644
--- a/meta/lib/buildstats.py
+++ b/meta/lib/buildstats.py
@@ -1,4 +1,6 @@
 #
+# Imported from openembedded-core
+#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 # Implements system state sampling. Called by buildstats.bbclass.
@@ -14,13 +16,27 @@ class SystemStats:
         bn = d.getVar('BUILDNAME')
         bsdir = os.path.join(d.getVar('BUILDSTATS_BASE'), bn)
         bb.utils.mkdirhier(bsdir)
+        file_handlers = [('diskstats', self._reduce_diskstats),
+                         ('meminfo', self._reduce_meminfo),
+                         ('stat', self._reduce_stat)]
+
+        # Some hosts like openSUSE have readable /proc/pressure files
+        # but throw errors when these files are opened. Catch these error
+        # and ensure that the reduce_proc_pressure directory is not created.
+        if os.path.exists("/proc/pressure"):
+            try:
+                with open('/proc/pressure/cpu', 'rb') as source:
+                    source.read()
+                pressuredir = os.path.join(bsdir, 'reduced_proc_pressure')
+                bb.utils.mkdirhier(pressuredir)
+                file_handlers.extend([('pressure/cpu', self._reduce_pressure),
+                                      ('pressure/io', self._reduce_pressure),
+                                      ('pressure/memory', self._reduce_pressure)])
+            except Exception:
+                pass

         self.proc_files = []
-        for filename, handler in (
-                ('diskstats', self._reduce_diskstats),
-                ('meminfo', self._reduce_meminfo),
-                ('stat', self._reduce_stat),
-        ):
+        for filename, handler in (file_handlers):
             # The corresponding /proc files might not exist on the host.
             # For example, /proc/diskstats is not available in virtualized
             # environments like Linux-VServer. Silently skip collecting
@@ -37,24 +53,32 @@ class SystemStats:
         # Last time that we sampled /proc data resp. recorded disk monitoring data.
         self.last_proc = 0
         self.last_disk_monitor = 0
-        # Minimum number of seconds between recording a sample. This
-        # becames relevant when we get called very often while many
-        # short tasks get started. Sampling during quiet periods
+        # Minimum number of seconds between recording a sample. This becames relevant when we get
+        # called very often while many short tasks get started. Sampling during quiet periods
         # depends on the heartbeat event, which fires less often.
-        self.min_seconds = 1
-
-        self.meminfo_regex = re.compile(b'^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):\s*(\d+)')
-        self.diskstats_regex = re.compile(b'^([hsv]d.|mtdblock\d|mmcblk\d|cciss/c\d+d\d+.*)$')
+        # By default, the Heartbeat events occur roughly once every second but the actual time
+        # between these events deviates by a few milliseconds, in most cases. Hence
+        # pick a somewhat arbitary tolerance such that we sample a large majority
+        # of the Heartbeat events. This ignores rare events that fall outside the minimum
+        # and may lead an extra sample in a given second every so often. However, it allows for fairly
+        # consistent intervals between samples without missing many events.
+        self.tolerance = 0.01
+        self.min_seconds = 1.0 - self.tolerance
+
+        self.meminfo_regex = re.compile(rb'^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):\s*(\d+)')
+        self.diskstats_regex = re.compile(rb'^([hsv]d.|mtdblock\d|mmcblk\d|cciss/c\d+d\d+.*)$')
         self.diskstats_ltime = None
         self.diskstats_data = None
         self.stat_ltimes = None
+        # Last time we sampled /proc/pressure. All resources stored in a single dict with the key as filename
+        self.last_pressure = {"pressure/cpu": None, "pressure/io": None, "pressure/memory": None}

     def close(self):
         self.monitor_disk.close()
         for _, output, _ in self.proc_files:
             output.close()

-    def _reduce_meminfo(self, time, data):
+    def _reduce_meminfo(self, time, data, filename):
         """
         Extracts 'MemTotal', 'MemFree', 'Buffers', 'Cached', 'SwapTotal', 'SwapFree'
         and writes their values into a single line, in that order.
@@ -75,7 +99,7 @@ class SystemStats:
             disk = linetokens[2]
             return self.diskstats_regex.match(disk)

-    def _reduce_diskstats(self, time, data):
+    def _reduce_diskstats(self, time, data, filename):
         relevant_tokens = filter(self._diskstats_is_relevant_line, map(lambda x: x.split(), data.split(b'\n')))
         diskdata = [0] * 3
         reduced = None
@@ -104,10 +128,10 @@ class SystemStats:

         return reduced

-    def _reduce_nop(self, time, data):
+    def _reduce_nop(self, time, data, filename):
         return (time, data)

-    def _reduce_stat(self, time, data):
+    def _reduce_stat(self, time, data, filename):
         if not data:
             return None
         # CPU times {user, nice, system, idle, io_wait, irq, softirq} from first line
@@ -126,14 +150,41 @@ class SystemStats:
         self.stat_ltimes = times
         return reduced

+    def _reduce_pressure(self, time, data, filename):
+        """
+        Return reduced pressure: {avg10, avg60, avg300} and delta total compared to the previous sample
+        for the cpu, io and memory resources. A common function is used for all 3 resources since the
+        format of the /proc/pressure file is the same in each case.
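The `_reduce_pressure` helper introduced above tokenizes one line of a Linux PSI (`/proc/pressure/*`) file. A minimal standalone sketch of that parsing, using the same tokenization the patch adds (the helper name here is ours, not part of the patch):

```python
def reduce_pressure(data: bytes):
    """Return (avg10, avg60, avg300, total) parsed from the first PSI line.

    /proc/pressure/cpu lines look like:
      some avg10=0.12 avg60=0.34 avg300=0.56 total=123456
    """
    # Take only the first line ("some ..."), split into whitespace tokens.
    tokens = data.split(b'\n', 1)[0].split()
    # Each token after the first is "name=value"; keep the value part.
    avg10 = float(tokens[1].split(b'=')[1])
    avg60 = float(tokens[2].split(b'=')[1])
    avg300 = float(tokens[3].split(b'=')[1])
    total = int(tokens[4].split(b'=')[1])
    return avg10, avg60, avg300, total

sample = b"some avg10=0.12 avg60=0.34 avg300=0.56 total=123456\n"
print(reduce_pressure(sample))  # -> (0.12, 0.34, 0.56, 123456)
```

In the patch itself, `total` is additionally differenced against the previous sample per resource file, which is why `last_pressure` is keyed by filename.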
+ """ + if not data: + return None + tokens = data.split(b'\n', 1)[0].split() + avg10 = float(tokens[1].split(b'=')[1]) + avg60 = float(tokens[2].split(b'=')[1]) + avg300 = float(tokens[3].split(b'=')[1]) + total = int(tokens[4].split(b'=')[1]) + + reduced = None + if self.last_pressure[filename]: + delta = total - self.last_pressure[filename] + reduced = (time, (avg10, avg60, avg300, delta)) + self.last_pressure[filename] = total + return reduced + def sample(self, event, force): + """ + Collect and log proc or disk_monitor stats periodically. + Return True if a new sample is collected and hence the value last_proc or last_disk_monitor + is changed. + """ + retval = False now = time.time() if (now - self.last_proc > self.min_seconds) or force: for filename, output, handler in self.proc_files: with open(os.path.join('/proc', filename), 'rb') as input: data = input.read() if handler: - reduced = handler(now, data) + reduced = handler(now, data, filename) else: reduced = (now, data) if reduced: @@ -150,6 +201,7 @@ class SystemStats: data + b'\n') self.last_proc = now + retval = True if isinstance(event, bb.event.MonitorDiskEvent) and \ ((now - self.last_disk_monitor > self.min_seconds) or force): @@ -159,3 +211,5 @@ class SystemStats: for dev, sample in event.disk_usage.items()]).encode('ascii') + b'\n') self.last_disk_monitor = now + retval = True + return retval diff --git a/meta/lib/oe/gpg_sign.py b/meta/lib/oe/gpg_sign.py index 6e35f3b7..7a9cec94 100644 --- a/meta/lib/oe/gpg_sign.py +++ b/meta/lib/oe/gpg_sign.py @@ -5,11 +5,12 @@ # """Helper module for GPG signing""" -import os import bb -import subprocess +import os import shlex +import subprocess +import tempfile class LocalSigner(object): """Class for handling local (on the build host) signing""" @@ -73,8 +74,6 @@ class LocalSigner(object): cmd += ['--homedir', self.gpg_path] if armor: cmd += ['--armor'] - if output_suffix: - cmd += ['-o', input_file + "." 
+ output_suffix] if use_sha256: cmd += ['--digest-algo', "SHA256"] @@ -83,19 +82,27 @@ class LocalSigner(object): if self.gpg_version > (2,1,): cmd += ['--pinentry-mode', 'loopback'] - cmd += [input_file] - try: if passphrase_file: with open(passphrase_file) as fobj: passphrase = fobj.readline(); - job = subprocess.Popen(cmd, stdin=subprocess.PIPE, stderr=subprocess.PIPE) - (_, stderr) = job.communicate(passphrase.encode("utf-8")) + if not output_suffix: + output_suffix = 'asc' if armor else 'sig' + output_file = input_file + "." + output_suffix + with tempfile.TemporaryDirectory(dir=os.path.dirname(output_file)) as tmp_dir: + tmp_file = os.path.join(tmp_dir, os.path.basename(output_file)) + cmd += ['-o', tmp_file] + + cmd += [input_file] + + job = subprocess.Popen(cmd, stdin=subprocess.PIPE, stderr=subprocess.PIPE) + (_, stderr) = job.communicate(passphrase.encode("utf-8")) - if job.returncode: - bb.fatal("GPG exited with code %d: %s" % (job.returncode, stderr.decode("utf-8"))) + if job.returncode: + bb.fatal("GPG exited with code %d: %s" % (job.returncode, stderr.decode("utf-8"))) + os.rename(tmp_file, output_file) except IOError as e: bb.error("IO error (%s): %s" % (e.errno, e.strerror)) raise Exception("Failed to sign '%s'" % input_file) diff --git a/meta/lib/oe/patch.py b/meta/lib/oe/patch.py index f6cd934a..35734a0d 100644 --- a/meta/lib/oe/patch.py +++ b/meta/lib/oe/patch.py @@ -4,9 +4,11 @@ # SPDX-License-Identifier: GPL-2.0-only # +import os +import shlex +import subprocess import oe.path import oe.types -import subprocess class NotFoundError(bb.BBHandledException): def __init__(self, path): @@ -27,8 +29,6 @@ class CmdError(bb.BBHandledException): def runcmd(args, dir = None): - import pipes - if dir: olddir = os.path.abspath(os.curdir) if not os.path.exists(dir): @@ -37,7 +37,7 @@ def runcmd(args, dir = None): # print("cwd: %s -> %s" % (olddir, dir)) try: - args = [ pipes.quote(str(arg)) for arg in args ] + args = [ shlex.quote(str(arg)) for arg in args ] 
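The `gpg_sign.py` change above makes signature output robust by having GPG write to a temporary file in the destination directory and renaming it into place afterwards; on POSIX, a rename within one filesystem is atomic, so a partially written signature is never visible under the final name. A minimal sketch of that write-then-rename pattern (the helper name and payload are illustrative, not from the patch):

```python
import os
import tempfile

def write_atomically(output_file: str, payload: bytes):
    """Write payload to output_file via a temp file in the same directory,
    then rename it into place, mirroring the tempfile + os.rename pattern
    the patch applies to the GPG output."""
    out_dir = os.path.dirname(os.path.abspath(output_file))
    # The temp dir lives next to the target so the rename stays on one filesystem.
    with tempfile.TemporaryDirectory(dir=out_dir) as tmp_dir:
        tmp_file = os.path.join(tmp_dir, os.path.basename(output_file))
        with open(tmp_file, 'wb') as f:
            f.write(payload)
        # Atomic on POSIX: readers see either the old file or the complete new one.
        os.rename(tmp_file, output_file)

write_atomically('signed.bin.sig', b'fake signature bytes')
print(open('signed.bin.sig', 'rb').read())  # -> b'fake signature bytes'
```

The same change also computes a default `output_suffix` (`asc` for armored output, `sig` otherwise) instead of relying on GPG's implicit output naming.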
cmd = " ".join(args) # print("cmd: %s" % cmd) proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) @@ -217,7 +217,7 @@ class PatchTree(PatchSet): with open(self.seriespath, 'w') as f: for p in patches: f.write(p) - + def Import(self, patch, force = None): """""" PatchSet.Import(self, patch, force) @@ -294,8 +294,9 @@ class PatchTree(PatchSet): self.Pop(all=True) class GitApplyTree(PatchTree): - patch_line_prefix = '%% original patch' - ignore_commit_prefix = '%% ignore' + notes_ref = "refs/notes/devtool" + original_patch = 'original patch' + ignore_commit = 'ignore' def __init__(self, dir, d): PatchTree.__init__(self, dir, d) @@ -452,7 +453,7 @@ class GitApplyTree(PatchTree): # Prepare git command cmd = ["git"] GitApplyTree.gitCommandUserOptions(cmd, commituser, commitemail) - cmd += ["commit", "-F", tmpfile] + cmd += ["commit", "-F", tmpfile, "--no-verify"] # git doesn't like plain email addresses as authors if author and '<' in author: cmd.append('--author="%s"' % author) @@ -461,44 +462,131 @@ class GitApplyTree(PatchTree): return (tmpfile, cmd) @staticmethod - def extractPatches(tree, startcommit, outdir, paths=None): + def addNote(repo, ref, key, value=None): + note = key + (": %s" % value if value else "") + notes_ref = GitApplyTree.notes_ref + runcmd(["git", "config", "notes.rewriteMode", "ignore"], repo) + runcmd(["git", "config", "notes.displayRef", notes_ref, notes_ref], repo) + runcmd(["git", "config", "notes.rewriteRef", notes_ref, notes_ref], repo) + runcmd(["git", "notes", "--ref", notes_ref, "append", "-m", note, ref], repo) + + @staticmethod + def removeNote(repo, ref, key): + notes = GitApplyTree.getNotes(repo, ref) + notes = {k: v for k, v in notes.items() if k != key and not k.startswith(key + ":")} + runcmd(["git", "notes", "--ref", GitApplyTree.notes_ref, "remove", "--ignore-missing", ref], repo) + for note, value in notes.items(): + GitApplyTree.addNote(repo, ref, note, value) + + @staticmethod + def 
getNotes(repo, ref): + import re + + note = None + try: + note = runcmd(["git", "notes", "--ref", GitApplyTree.notes_ref, "show", ref], repo) + prefix = "" + except CmdError: + note = runcmd(['git', 'show', '-s', '--format=%B', ref], repo) + prefix = "%% " + + note_re = re.compile(r'^%s(.*?)(?::\s*(.*))?$' % prefix) + notes = dict() + for line in note.splitlines(): + m = note_re.match(line) + if m: + notes[m.group(1)] = m.group(2) + + return notes + + @staticmethod + def commitIgnored(subject, dir=None, files=None, d=None): + if files: + runcmd(['git', 'add'] + files, dir) + cmd = ["git"] + GitApplyTree.gitCommandUserOptions(cmd, d=d) + cmd += ["commit", "-m", subject, "--no-verify"] + runcmd(cmd, dir) + GitApplyTree.addNote(dir, "HEAD", GitApplyTree.ignore_commit) + + @staticmethod + def extractPatches(tree, startcommits, outdir, paths=None): import tempfile import shutil tempdir = tempfile.mkdtemp(prefix='oepatch') try: - shellcmd = ["git", "format-patch", "--no-signature", "--no-numbered", startcommit, "-o", tempdir] - if paths: - shellcmd.append('--') - shellcmd.extend(paths) - out = runcmd(["sh", "-c", " ".join(shellcmd)], tree) - if out: - for srcfile in out.split(): - for encoding in ['utf-8', 'latin-1']: - patchlines = [] - outfile = None - try: - with open(srcfile, 'r', encoding=encoding) as f: - for line in f: - if line.startswith(GitApplyTree.patch_line_prefix): - outfile = line.split()[-1].strip() - continue - if line.startswith(GitApplyTree.ignore_commit_prefix): - continue - patchlines.append(line) - except UnicodeDecodeError: + for name, rev in startcommits.items(): + shellcmd = ["git", "format-patch", "--no-signature", "--no-numbered", rev, "-o", tempdir] + if paths: + shellcmd.append('--') + shellcmd.extend(paths) + out = runcmd(["sh", "-c", " ".join(shellcmd)], os.path.join(tree, name)) + if out: + for srcfile in out.split(): + # This loop, which is used to remove any line that + # starts with "%% original patch", is kept for backwards + # 
compatibility. If/when that compatibility is dropped, + # it can be replaced with code to just read the first + # line of the patch file to get the SHA-1, and the code + # below that writes the modified patch file can be + # replaced with a simple file move. + for encoding in ['utf-8', 'latin-1']: + patchlines = [] + try: + with open(srcfile, 'r', encoding=encoding, newline='') as f: + for line in f: + if line.startswith("%% " + GitApplyTree.original_patch): + continue + patchlines.append(line) + except UnicodeDecodeError: + continue + break + else: + raise PatchError('Unable to find a character encoding to decode %s' % srcfile) + + sha1 = patchlines[0].split()[1] + notes = GitApplyTree.getNotes(os.path.join(tree, name), sha1) + if GitApplyTree.ignore_commit in notes: continue - break - else: - raise PatchError('Unable to find a character encoding to decode %s' % srcfile) - - if not outfile: - outfile = os.path.basename(srcfile) - with open(os.path.join(outdir, outfile), 'w') as of: - for line in patchlines: - of.write(line) + outfile = notes.get(GitApplyTree.original_patch, os.path.basename(srcfile)) + + bb.utils.mkdirhier(os.path.join(outdir, name)) + with open(os.path.join(outdir, name, outfile), 'w') as of: + for line in patchlines: + of.write(line) finally: shutil.rmtree(tempdir) + def _need_dirty_check(self): + fetch = bb.fetch2.Fetch([], self.d) + check_dirtyness = False + for url in fetch.urls: + url_data = fetch.ud[url] + parm = url_data.parm + # a git url with subpath param will surely be dirty + # since the git tree from which we clone will be emptied + # from all files that are not in the subpath + if url_data.type == 'git' and parm.get('subpath'): + check_dirtyness = True + return check_dirtyness + + def _commitpatch(self, patch, patchfilevar): + output = "" + # Add all files + shellcmd = ["git", "add", "-f", "-A", "."] + output += runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) + # Exclude the patches directory + shellcmd = ["git", "reset", "HEAD", 
self.patchdir] + output += runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) + # Commit the result + (tmpfile, shellcmd) = self.prepareCommit(patch['file'], self.commituser, self.commitemail) + try: + shellcmd.insert(0, patchfilevar) + output += runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) + finally: + os.remove(tmpfile) + return output + def _applypatch(self, patch, force = False, reverse = False, run = True): import shutil @@ -513,27 +601,26 @@ class GitApplyTree(PatchTree): return runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) - # Add hooks which add a pointer to the original patch file name in the commit message reporoot = (runcmd("git rev-parse --show-toplevel".split(), self.dir) or '').strip() if not reporoot: raise Exception("Cannot get repository root for directory %s" % self.dir) - hooks_dir = os.path.join(reporoot, '.git', 'hooks') - hooks_dir_backup = hooks_dir + '.devtool-orig' - if os.path.lexists(hooks_dir_backup): - raise Exception("Git hooks backup directory already exists: %s" % hooks_dir_backup) - if os.path.lexists(hooks_dir): - shutil.move(hooks_dir, hooks_dir_backup) - os.mkdir(hooks_dir) - commithook = os.path.join(hooks_dir, 'commit-msg') - applyhook = os.path.join(hooks_dir, 'applypatch-msg') - with open(commithook, 'w') as f: - # NOTE: the formatting here is significant; if you change it you'll also need to - # change other places which read it back - f.write('echo "\n%s: $PATCHFILE" >> $1' % GitApplyTree.patch_line_prefix) - os.chmod(commithook, 0o755) - shutil.copy2(commithook, applyhook) + + patch_applied = True try: patchfilevar = 'PATCHFILE="%s"' % os.path.basename(patch['file']) + if self._need_dirty_check(): + # Check dirtyness of the tree + try: + output = runcmd(["git", "--work-tree=%s" % reporoot, "status", "--short"]) + except CmdError: + pass + else: + if output: + # The tree is dirty, no need to try to apply patches with git anymore + # since they fail, fallback directly to patch + output = 
PatchTree._applypatch(self, patch, force, reverse, run) + output += self._commitpatch(patch, patchfilevar) + return output try: shellcmd = [patchfilevar, "git", "--work-tree=%s" % reporoot] self.gitCommandUserOptions(shellcmd, self.commituser, self.commitemail) @@ -560,24 +647,14 @@ class GitApplyTree(PatchTree): except CmdError: # Fall back to patch output = PatchTree._applypatch(self, patch, force, reverse, run) - # Add all files - shellcmd = ["git", "add", "-f", "-A", "."] - output += runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) - # Exclude the patches directory - shellcmd = ["git", "reset", "HEAD", self.patchdir] - output += runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) - # Commit the result - (tmpfile, shellcmd) = self.prepareCommit(patch['file'], self.commituser, self.commitemail) - try: - shellcmd.insert(0, patchfilevar) - output += runcmd(["sh", "-c", " ".join(shellcmd)], self.dir) - finally: - os.remove(tmpfile) + output += self._commitpatch(patch, patchfilevar) return output + except: + patch_applied = False + raise finally: - shutil.rmtree(hooks_dir) - if os.path.lexists(hooks_dir_backup): - shutil.move(hooks_dir_backup, hooks_dir) + if patch_applied: + GitApplyTree.addNote(self.dir, "HEAD", GitApplyTree.original_patch, os.path.basename(patch['file'])) class QuiltTree(PatchSet): @@ -738,8 +815,9 @@ class NOOPResolver(Resolver): self.patchset.Push() except Exception: import sys - os.chdir(olddir) raise + finally: + os.chdir(olddir) # Patch resolver which relies on the user doing all the work involved in the # resolution, with the exception of refreshing the remote copy of the patch @@ -799,9 +877,9 @@ class UserResolver(Resolver): # User did not fix the problem. Abort. 
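The `getNotes` method in the patch.py changes above replaces the old commit-message-hook scheme with git notes, parsing each note line as either a bare key (`ignore`) or a `key: value` pair (`original patch: <file>`), with a `"%% "` prefix when falling back to legacy commit messages. The line parsing can be sketched in isolation (the function name here is ours; the regex is the one the patch introduces):

```python
import re

def parse_notes(text: str, prefix: str = "") -> dict:
    """Parse note lines the way GitApplyTree.getNotes does: each line is
    either 'key' (value None) or 'key: value'; prefix is '%% ' when the
    notes come from a legacy commit-message trailer."""
    note_re = re.compile(r'^%s(.*?)(?::\s*(.*))?$' % prefix)
    notes = {}
    for line in text.splitlines():
        m = note_re.match(line)
        if m:
            # group(1) is the key, group(2) the optional value (None if absent).
            notes[m.group(1)] = m.group(2)
    return notes

print(parse_notes("original patch: fix-foo.patch\nignore"))
# -> {'original patch': 'fix-foo.patch', 'ignore': None}
```

Storing this metadata in `refs/notes/devtool` keeps the commits themselves clean, which is why `extractPatches` only strips `"%% original patch"` lines for backwards compatibility.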
raise PatchError("Patch application failed, and user did not fix and refresh the patch.") except Exception: - os.chdir(olddir) raise - os.chdir(olddir) + finally: + os.chdir(olddir) def patch_path(url, fetch, workdir, expand=True): @@ -921,4 +999,3 @@ def should_apply(parm, d): return False, "applies to later version" return True, None - diff --git a/meta/lib/oe/path.py b/meta/lib/oe/path.py index 348feebe..68dcb595 100644 --- a/meta/lib/oe/path.py +++ b/meta/lib/oe/path.py @@ -125,7 +125,8 @@ def copyhardlinktree(src, dst): if os.path.isdir(src): if len(glob.glob('%s/.??*' % src)) > 0: source = './.??* ' - source += './*' + if len(glob.glob('%s/**' % src)) > 0: + source += './*' s_dir = src else: source = src @@ -171,6 +172,9 @@ def symlink(source, destination, force=False): if e.errno != errno.EEXIST or os.readlink(destination) != source: raise +def relsymlink(target, name, force=False): + symlink(os.path.relpath(target, os.path.dirname(name)), name, force=force) + def find(dir, **walkoptions): """ Given a directory, recurses into that directory, returning all files as absolute paths. 
""" diff --git a/meta/lib/oe/reproducible.py b/meta/lib/oe/reproducible.py index 448befce..06a4b5fc 100644 --- a/meta/lib/oe/reproducible.py +++ b/meta/lib/oe/reproducible.py @@ -1,5 +1,5 @@ # -# Copyright OpenEmbedded Contributors +# Imported from openembedded-core # # SPDX-License-Identifier: GPL-2.0-only # diff --git a/meta/lib/oe/sstatesig.py b/meta/lib/oe/sstatesig.py index acd47a05..63202204 100644 --- a/meta/lib/oe/sstatesig.py +++ b/meta/lib/oe/sstatesig.py @@ -6,6 +6,7 @@ import bb.siggen import bb.runqueue import oe +import netrc def sstate_rundepfilter(siggen, fn, recipename, task, dep, depname, dataCaches): # Return True if we should keep the dependency, False to drop it @@ -26,18 +27,15 @@ def sstate_rundepfilter(siggen, fn, recipename, task, dep, depname, dataCaches): return "/allarch.bbclass" in inherits def isImage(mc, fn): return "/image.bbclass" in " ".join(dataCaches[mc].inherits[fn]) - def isSPDXTask(task): - return task in ("do_create_spdx", "do_create_runtime_spdx") depmc, _, deptaskname, depmcfn = bb.runqueue.split_tid_mcfn(dep) mc, _ = bb.runqueue.split_mc(fn) - # Keep all dependencies between SPDX tasks in the signature. SPDX documents - # are linked together by hashes, which means if a dependent document changes, - # all downstream documents must be re-written (even if they are "safe" - # dependencies). - if isSPDXTask(task) and isSPDXTask(deptaskname): - return True + # We can skip the rm_work task signature to avoid running the task + # when we remove some tasks from the dependencie chain + # i.e INHERIT:remove = "create-spdx" will trigger the do_rm_work + if task == "do_rm_work": + return False # (Almost) always include our own inter-task dependencies (unless it comes # from a mcdepends). 
 The exception is the special
@@ -95,15 +93,6 @@ def sstate_lockedsigs(d):
             sigs[pn][task] = [h, siggen_lockedsigs_var]
     return sigs
 
-class SignatureGeneratorOEBasic(bb.siggen.SignatureGeneratorBasic):
-    name = "OEBasic"
-    def init_rundepcheck(self, data):
-        self.abisaferecipes = (data.getVar("SIGGEN_EXCLUDERECIPES_ABISAFE") or "").split()
-        self.saferecipedeps = (data.getVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS") or "").split()
-        pass
-    def rundep_check(self, fn, recipename, task, dep, depname, dataCaches = None):
-        return sstate_rundepfilter(self, fn, recipename, task, dep, depname, dataCaches)
-
 class SignatureGeneratorOEBasicHashMixIn(object):
     supports_multiconfig_datacaches = True
 
@@ -116,6 +105,8 @@ class SignatureGeneratorOEBasicHashMixIn(object):
         self.lockedhashfn = {}
         self.machine = data.getVar("MACHINE")
         self.mismatch_msgs = []
+        self.mismatch_number = 0
+        self.lockedsigs_msgs = ""
         self.unlockedrecipes = (data.getVar("SIGGEN_UNLOCKED_RECIPES") or
                                 "").split()
         self.unlockedrecipes = { k: "" for k in self.unlockedrecipes }
@@ -152,9 +143,10 @@ class SignatureGeneratorOEBasicHashMixIn(object):
         super().set_taskdata(data[3:])
 
     def dump_sigs(self, dataCache, options):
-        sigfile = os.getcwd() + "/locked-sigs.inc"
-        bb.plain("Writing locked sigs to %s" % sigfile)
-        self.dump_lockedsigs(sigfile)
+        if 'lockedsigs' in options:
+            sigfile = os.getcwd() + "/locked-sigs.inc"
+            bb.plain("Writing locked sigs to %s" % sigfile)
+            self.dump_lockedsigs(sigfile)
         return super(bb.siggen.SignatureGeneratorBasicHash, self).dump_sigs(dataCache, options)
 
@@ -199,6 +191,7 @@ class SignatureGeneratorOEBasicHashMixIn(object):
                 #bb.warn("Using %s %s %s" % (recipename, task, h))
 
                 if h != h_locked and h_locked != unihash:
+                    self.mismatch_number += 1
                     self.mismatch_msgs.append('The %s:%s sig is computed to be %s, but the sig is locked to %s in %s'
                                               % (recipename, task, h, h_locked, var))
 
@@ -213,10 +206,10 @@ class SignatureGeneratorOEBasicHashMixIn(object):
             return self.lockedhashes[tid]
         return super().get_stampfile_hash(tid)
 
-    def get_unihash(self, tid):
+    def get_cached_unihash(self, tid):
         if tid in self.lockedhashes and self.lockedhashes[tid] and not self._internal:
             return self.lockedhashes[tid]
-        return super().get_unihash(tid)
+        return super().get_cached_unihash(tid)
 
     def dump_sigtask(self, fn, task, stampbase, runtime):
         tid = fn + ":" + task
@@ -227,6 +220,9 @@ class SignatureGeneratorOEBasicHashMixIn(object):
     def dump_lockedsigs(self, sigfile, taskfilter=None):
         types = {}
         for tid in self.runtaskdeps:
+            # Bitbake changed this to a tuple in newer versions
+            if isinstance(tid, tuple):
+                tid = tid[1]
             if taskfilter:
                 if not tid in taskfilter:
                     continue
@@ -276,6 +272,15 @@ class SignatureGeneratorOEBasicHashMixIn(object):
         warn_msgs = []
         error_msgs = []
         sstate_missing_msgs = []
+        info_msgs = None
+
+        if self.lockedsigs:
+            if len(self.lockedsigs) > 10:
+                self.lockedsigs_msgs = "There are %s recipes with locked tasks (%s task(s) have non matching signature)" % (len(self.lockedsigs), self.mismatch_number)
+            else:
+                self.lockedsigs_msgs = "The following recipes have locked tasks:"
+                for pn in self.lockedsigs:
+                    self.lockedsigs_msgs += " %s" % (pn)
 
         for tid in sq_data['hash']:
             if tid not in found:
@@ -288,7 +293,9 @@ class SignatureGeneratorOEBasicHashMixIn(object):
                         % (pn, taskname, sq_data['hash'][tid]))
 
         checklevel = d.getVar("SIGGEN_LOCKEDSIGS_TASKSIG_CHECK")
-        if checklevel == 'warn':
+        if checklevel == 'info':
+            info_msgs = self.lockedsigs_msgs
+        if checklevel == 'warn' or checklevel == 'info':
             warn_msgs += self.mismatch_msgs
         elif checklevel == 'error':
             error_msgs += self.mismatch_msgs
@@ -299,6 +306,8 @@ class SignatureGeneratorOEBasicHashMixIn(object):
         elif checklevel == 'error':
             error_msgs += sstate_missing_msgs
 
+        if info_msgs:
+            bb.note(info_msgs)
         if warn_msgs:
             bb.warn("\n".join(warn_msgs))
         if error_msgs:
@@ -318,9 +327,21 @@ class SignatureGeneratorOEEquivHash(SignatureGeneratorOEBasicHashMixIn, bb.sigge
         self.method = data.getVar('SSTATE_HASHEQUIV_METHOD')
         if not self.method:
             bb.fatal("OEEquivHash requires SSTATE_HASHEQUIV_METHOD to be set")
+        self.max_parallel = int(data.getVar('BB_HASHSERVE_MAX_PARALLEL') or 1)
+        self.username = data.getVar("BB_HASHSERVE_USERNAME")
+        self.password = data.getVar("BB_HASHSERVE_PASSWORD")
+        if not self.username or not self.password:
+            try:
+                n = netrc.netrc()
+                auth = n.authenticators(self.server)
+                if auth is not None:
+                    self.username, _, self.password = auth
+            except FileNotFoundError:
+                pass
+            except netrc.NetrcParseError as e:
+                bb.warn("Error parsing %s:%s: %s" % (e.filename, str(e.lineno), e.msg))
 
 # Insert these classes into siggen's namespace so it can see and select them
-bb.siggen.SignatureGeneratorOEBasic = SignatureGeneratorOEBasic
 bb.siggen.SignatureGeneratorOEBasicHash = SignatureGeneratorOEBasicHash
 bb.siggen.SignatureGeneratorOEEquivHash = SignatureGeneratorOEEquivHash
 
@@ -334,14 +355,14 @@ def find_siginfo(pn, taskname, taskhashlist, d):
     if not taskname:
         # We have to derive pn and taskname
         key = pn
-        splitit = key.split('.bb:')
-        taskname = splitit[1]
-        pn = os.path.basename(splitit[0]).split('_')[0]
-        if key.startswith('virtual:native:'):
-            pn = pn + '-native'
+        if key.startswith("mc:"):
+            # mc:<mc_name>:<pn>:<taskname>
+            _, _, pn, taskname = key.split(':', 3)
+        else:
+            # <pn>:<taskname>
+            pn, taskname = key.split(':', 1)
 
     hashfiles = {}
-    filedates = {}
 
     def get_hashval(siginfo):
         if siginfo.endswith('.siginfo'):
@@ -349,6 +370,9 @@ def find_siginfo(pn, taskname, taskhashlist, d):
         else:
             return siginfo.rpartition('.')[2]
 
+    def get_time(fullpath):
+        return os.stat(fullpath).st_mtime
+
     # First search in stamps dir
     localdata = d.createCopy()
     localdata.setVar('MULTIMACH_TARGET_SYS', '*')
@@ -364,24 +388,21 @@ def find_siginfo(pn, taskname, taskhashlist, d):
     filespec = '%s.%s.sigdata.*' % (stamp, taskname)
     foundall = False
     import glob
+    bb.debug(1, "Calling glob.glob on {}".format(filespec))
     for fullpath in glob.glob(filespec):
         match = False
         if taskhashlist:
             for taskhash in taskhashlist:
                 if fullpath.endswith('.%s' % taskhash):
-                    hashfiles[taskhash] = fullpath
+                    hashfiles[taskhash] = {'path':fullpath, 'sstate':False, 'time':get_time(fullpath)}
                     if len(hashfiles) == len(taskhashlist):
                         foundall = True
                         break
         else:
-            try:
-                filedates[fullpath] = os.stat(fullpath).st_mtime
-            except OSError:
-                continue
             hashval = get_hashval(fullpath)
-            hashfiles[hashval] = fullpath
+            hashfiles[hashval] = {'path':fullpath, 'sstate':False, 'time':get_time(fullpath)}
 
-    if not taskhashlist or (len(filedates) < 2 and not foundall):
+    if not taskhashlist or (len(hashfiles) < 2 and not foundall):
         # That didn't work, look in sstate-cache
         hashes = taskhashlist or ['?' * 64]
         localdata = bb.data.createCopy(d)
@@ -390,6 +411,9 @@ def find_siginfo(pn, taskname, taskhashlist, d):
             localdata.setVar('TARGET_VENDOR', '*')
             localdata.setVar('TARGET_OS', '*')
             localdata.setVar('PN', pn)
+            # gcc-source is a special case, same as with local stamps above
+            if pn.startswith("gcc-source"):
+                localdata.setVar('PN', "gcc")
             localdata.setVar('PV', '*')
             localdata.setVar('PR', '*')
             localdata.setVar('BB_TASKHASH', hashval)
@@ -401,24 +425,18 @@ def find_siginfo(pn, taskname, taskhashlist, d):
                 localdata.setVar('SSTATE_EXTRAPATH', "${NATIVELSBSTRING}/")
 
             filespec = '%s.siginfo' % localdata.getVar('SSTATE_PKG')
+            bb.debug(1, "Calling glob.glob on {}".format(filespec))
             matchedfiles = glob.glob(filespec)
             for fullpath in matchedfiles:
                 actual_hashval = get_hashval(fullpath)
                 if actual_hashval in hashfiles:
                     continue
-                hashfiles[hashval] = fullpath
-                if not taskhashlist:
-                    try:
-                        filedates[fullpath] = os.stat(fullpath).st_mtime
-                    except:
-                        continue
+                hashfiles[actual_hashval] = {'path':fullpath, 'sstate':True, 'time':get_time(fullpath)}
 
-    if taskhashlist:
-        return hashfiles
-    else:
-        return filedates
+    return hashfiles
 
 bb.siggen.find_siginfo = find_siginfo
+bb.siggen.find_siginfo_version = 2
 
 
 def sstate_get_manifest_filename(task, d):
@@ -463,11 +481,15 @@ def find_sstate_manifest(taskdata, taskdata2, taskname, d, multilibcache):
         pkgarchs.append('allarch')
         pkgarchs.append('${SDK_ARCH}_${SDK_ARCH}-${SDKPKGSUFFIX}')
 
+    searched_manifests = []
+
     for pkgarch in pkgarchs:
         manifest = d2.expand("${SSTATE_MANIFESTS}/manifest-%s-%s.%s" % (pkgarch, taskdata, taskname))
         if os.path.exists(manifest):
             return manifest, d2
-    bb.fatal("Manifest %s not found in %s (variant '%s')?" % (manifest, d2.expand(" ".join(pkgarchs)), variant))
+        searched_manifests.append(manifest)
+    bb.fatal("The sstate manifest for task '%s:%s' (multilib variant '%s') could not be found.\nThe pkgarchs considered were: %s.\nBut none of these manifests exists:\n    %s"
+            % (taskdata, taskname, variant, d2.expand(", ".join(pkgarchs)), "\n    ".join(searched_manifests)))
     return None, d2
 
 def OEOuthashBasic(path, sigfile, task, d):
@@ -587,9 +609,9 @@ def OEOuthashBasic(path, sigfile, task, d):
                     update_hash(" %10s" % pwd.getpwuid(s.st_uid).pw_name)
                     update_hash(" %10s" % grp.getgrgid(s.st_gid).gr_name)
                 except KeyError as e:
-                    bb.warn("KeyError in %s" % path)
                     msg = ("KeyError: %s\nPath %s is owned by uid %d, gid %d, which doesn't match "
-                           "any user/group on target. This may be due to host contamination." % (e, path, s.st_uid, s.st_gid))
+                           "any user/group on target. This may be due to host contamination." %
+                           (e, os.path.abspath(path), s.st_uid, s.st_gid))
                     raise Exception(msg).with_traceback(e.__traceback__)
 
                 if include_timestamps:
@@ -652,6 +674,10 @@ def OEOuthashBasic(path, sigfile, task, d):
                 if f == 'fixmepath':
                     continue
                 process(os.path.join(root, f))
+
+            for dir in dirs:
+                if os.path.islink(os.path.join(root, dir)):
+                    process(os.path.join(root, dir))
         finally:
             os.chdir(prev_dir)
 
diff --git a/meta/lib/oe/terminal.py b/meta/lib/oe/terminal.py
index 8a3d84d3..2ae7a45a 100644
--- a/meta/lib/oe/terminal.py
+++ b/meta/lib/oe/terminal.py
@@ -104,6 +104,10 @@ class Rxvt(XTerminal):
     command = 'rxvt -T "{title}" -e {command}'
     priority = 1
 
+class URxvt(XTerminal):
+    command = 'urxvt -T "{title}" -e {command}'
+    priority = 1
+
class Screen(Terminal):
    command = 'screen -D -m -t "{title}" -S devshell {command}'
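A note for reviewers: the netrc fallback added to SignatureGeneratorOEEquivHash can be exercised outside of bitbake. The sketch below mirrors that logic as a standalone helper; the function name, the server `hashserv.example.com`, and the throwaway netrc file are illustrative assumptions, not part of the patch:

```python
import netrc
import os
import tempfile

def lookup_hashserve_credentials(server, username=None, password=None,
                                 netrc_path=None):
    # Mirror of the patch's fallback: if explicit credentials are not
    # provided, try to read them from a netrc file for the given server.
    if username and password:
        return username, password
    try:
        n = netrc.netrc(netrc_path)
        auth = n.authenticators(server)
        if auth is not None:
            # authenticators() returns (login, account, password)
            username, _, password = auth
    except FileNotFoundError:
        pass  # no netrc file: keep whatever credentials we had
    except netrc.NetrcParseError as e:
        print("Error parsing %s:%s: %s" % (e.filename, str(e.lineno), e.msg))
    return username, password

# Illustrative usage with a throwaway netrc file
with tempfile.NamedTemporaryFile('w', suffix='.netrc', delete=False) as f:
    f.write("machine hashserv.example.com login builder password secret\n")
    path = f.name
try:
    user, pw = lookup_hashserve_credentials("hashserv.example.com",
                                            netrc_path=path)
    print(user, pw)  # builder secret
finally:
    os.unlink(path)
```

Passing an explicit path avoids the permission check that netrc applies to the default ~/.netrc, which keeps the sketch deterministic across hosts.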
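Also for reviewers: the find_siginfo rework replaces the two possible return shapes of the old code (a hash-to-path dict, or a path-to-mtime dict when no taskhashlist was given) with a single dict of per-hash records, which is what find_siginfo_version = 2 advertises. A minimal standalone sketch of the new shape, with a made-up helper name and made-up stamp file names:

```python
import glob
import os
import tempfile

def collect_siginfo(directory, sstate=False):
    # Build the hash-keyed record dict that the reworked find_siginfo
    # returns: {hashval: {'path': ..., 'sstate': ..., 'time': ...}}
    def get_hashval(siginfo):
        # For sigdata stamp files the hash is the last dot-separated field
        return siginfo.rpartition('.')[2]

    def get_time(fullpath):
        return os.stat(fullpath).st_mtime

    hashfiles = {}
    for fullpath in glob.glob(os.path.join(directory, '*.sigdata.*')):
        hashval = get_hashval(fullpath)
        hashfiles[hashval] = {'path': fullpath, 'sstate': sstate,
                              'time': get_time(fullpath)}
    return hashfiles

# Illustrative usage with throwaway stamp files
with tempfile.TemporaryDirectory() as d:
    for h in ('aaaa', 'bbbb'):
        open(os.path.join(d, 'stamp.do_compile.sigdata.%s' % h), 'w').close()
    files = collect_siginfo(d)
    print(sorted(files))  # ['aaaa', 'bbbb']
    print(all(not v['sstate'] for v in files.values()))  # True
```

Keying every result by hash and carrying 'sstate' and 'time' alongside the path lets callers (e.g. bitbake-diffsigs) pick the newest match without a second stat pass.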