test/py: Fix unicode handling for log filtering

At present the unicode filtering seems to get confused at times with
this error:

  UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position
     32: ordinal not in range(128)

It seems to be due to self._nonprint being interpreted as UTF-8. Fix it
by using ordinals instead of characters, changing the string to a set.

Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Simon Glass 2018-10-01 21:12:34 -06:00
parent ec9e0f4712
commit 87b05ee3a9

@@ -314,8 +314,9 @@
 
     # The set of characters that should be represented as hexadecimal codes in
     # the log file.
-    _nonprint = ('%' + ''.join(chr(c) for c in range(0, 32) if c not in (9, 10)) +
-                 ''.join(chr(c) for c in range(127, 256)))
+    _nonprint = {ord('%')}
+    _nonprint.update({c for c in range(0, 32) if c not in (9, 10)})
+    _nonprint.update({c for c in range(127, 256)})
 
     def _escape(self, data):
        """Render data format suitable for inclusion in an HTML document.
@@ -331,7 +332,7 @@
        """
 
        data = data.replace(chr(13), '')
-       data = ''.join((c in self._nonprint) and ('%%%02x' % ord(c)) or
+       data = ''.join((ord(c) in self._nonprint) and ('%%%02x' % ord(c)) or
                       c for c in data)
        data = cgi.escape(data)
        return data
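To illustrate why the change works, here is a minimal stand-alone sketch of the fixed escaping logic. It is not the U-Boot code itself: the module-level `_nonprint` set and the `escape()` function below are simplified stand-ins for the class members shown in the diff, and the final `cgi.escape()` HTML-escaping step is omitted. The key point is that membership tests against a set of integer ordinals never require decoding the set itself, so no `UnicodeDecodeError` can arise from `_nonprint`.

```python
# Build the set of code points to escape: '%', the control characters
# except tab (9) and newline (10), and everything from DEL (127) to 255.
_nonprint = {ord('%')}
_nonprint.update({c for c in range(0, 32) if c not in (9, 10)})
_nonprint.update({c for c in range(127, 256)})

def escape(data):
    """Replace non-printable characters with %xx hexadecimal codes.

    Carriage returns are dropped entirely; tab and newline pass through.
    (The real method also HTML-escapes the result; that step is omitted
    here for brevity.)
    """
    data = data.replace(chr(13), '')
    return ''.join('%%%02x' % ord(c) if ord(c) in _nonprint else c
                   for c in data)

print(escape('100% done\x80'))  # -> 100%25 done%80
```

Because the comparison is `ord(c) in _nonprint` rather than `c in _nonprint`, the lookup is a pure integer set-membership test, regardless of how the input string itself is encoded.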