Date: Tue, 6 Dec 2016 10:26:14 -0800
From: Grant Murphy <>
Subject: Re: Opensource Python whitebox code analysis tool recommendations

On Tue, Dec 6, 2016 at 9:02 AM, Fiedler Roman <> wrote:

> Hello list,
> I just stumbled over the effects of the following programming error, due to
> an unwanted singleton in Python, bypassing intended process restrictions
> (allowed number of elements in my case) and of course causing data
> corruption:
> class A:
>   def __init__(self, value=[]):
>     self.value=value
>     self.valueCloned=value[:]
>   def show(self):
>     print 'IDs value %x, cloned %x' % (id(self.value), id(self.valueCloned))
>   def append(self, data):
>     self.value.append(data)
> # Keep reference to avoid garbage collection interference.
> objFirst=A()
> objNext=A()
> # Check references to prohibit optimization.
> if objFirst==objNext: raise Exception('Impossible')
> As this type of error seems to be fairly common in code, at least according
> to grep, are there any tool recommendations for automatic analysis of such
> code? It should trace all non-trivial (not None, int, float, str, ...)
> constructor argument assignments and catch at least problematic invocations
> like "self.value.append". A problem is that in many cases the mere existence
> of a constructor like the one above does not automatically lead to
> corruption/concurrency issues. For example, the tool should not trigger on
> this (older but still in use) version of django_common/ or, at least when
> triggering, only at "json.dumps()".
> class JsonResponse(HttpResponse):
>   def __init__(self, data={ }, errors=[ ], success=True):
>     """
>     data is a map, errors a list
>     """
>     json = json_response(data=data, errors=errors, success=success)
>     super(JsonResponse, self).__init__(json, content_type='application/json')
> def json_response(data={ }, errors=[ ], success=True):
>   data.update({
>     'errors': errors,
>     'success': len(errors) == 0 and success,
>   })
>   return json.dumps(data)
> Due to weak typing, it might be too hard to catch all problematic locations,
> e.g. a field modified in a subclass. Without source code analysis tools
> available to do such checks, I would also try out any approach where the
> argument value is made immutable, thus leading to a crash in the testbed.
> It would be great if the tool did the whole analysis more from the security
> than the code quality perspective: it is more interesting to audit one's own
> code and referenced/redistributed third-party stuff for things that "are
> very likely to be problematic/vulnerable" than to have a quality tool
> recommending changes to all those lines, which is not quite realistic.
> Kind regards,
> Roman
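For what it's worth, the usual idiom for avoiding the shared mutable default is a None sentinel, so each instance gets its own fresh list. A minimal sketch (Python 3 here, and the class name is just reused from the quoted example for illustration):

```python
# Sketch of the None-sentinel idiom (not from the quoted code).
# The default value of a parameter like value=[] is evaluated once, at
# function definition time, so every call that relies on the default
# shares the same list object. Using None avoids that.
class A:
    def __init__(self, value=None):
        # Create a fresh list per instance when no argument is given.
        self.value = [] if value is None else value

    def append(self, data):
        self.value.append(data)


objFirst = A()
objNext = A()
objFirst.append('x')
# With the sentinel, the instances no longer share state:
assert objFirst.value == ['x']
assert objNext.value == []
```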

You could check out Bandit:

I'm not sure it quite fits what you're after, but it could be worth a look.
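If you want a quick home-grown check in the meantime, the stdlib ast module is enough to flag mutable default arguments statically. A minimal sketch (the helper name and heuristics are mine, Python 3 assumed):

```python
import ast

# Hypothetical helper (not an existing tool): report function/method
# definitions whose default arguments are mutable literals ([], {},
# set literals) or bare constructor calls like list()/dict()/set().
MUTABLE_CALLS = {'list', 'dict', 'set'}

def find_mutable_defaults(source, filename='<string>'):
    findings = []
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # kw_defaults holds None for kw-only args without defaults.
            defaults = list(node.args.defaults)
            defaults += [d for d in node.args.kw_defaults if d is not None]
            for default in defaults:
                mutable = isinstance(default, (ast.List, ast.Dict, ast.Set))
                if (isinstance(default, ast.Call)
                        and isinstance(default.func, ast.Name)):
                    mutable = default.func.id in MUTABLE_CALLS
                if mutable:
                    findings.append((filename, default.lineno, node.name))
    return findings

# Example run against the pattern from the original mail:
code = "class A:\n  def __init__(self, value=[]):\n    self.value = value\n"
print(find_mutable_defaults(code))  # flags __init__'s [] default
```

Pylint ships a similar built-in check (W0102, dangerous-default-value), and flake8-bugbear's B006 covers the same pattern, so either may already do what you need without writing anything yourself.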
