I haven't spent much time on this (it can still be improved), and it's my first time using the tokenize module, but here is what I came up with.
While looking for a Python parser I found this module; it parses Python code and classifies every token, and from there you can do whatever you want with it.
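For context, here is a quick sketch (my own minimal example, not from the code below) of what the module produces: each token carries a numeric type, the matched string, and its position.

```python
import io
import tokenize

# tokenize a small snippet from an in-memory string instead of a file
source = "if x:\n    y = 1\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    # tok.type is a numeric code; tokenize.tok_name maps it to a readable name
    print(tokenize.tok_name[tok.type], repr(tok.string))
```

Note how the keyword `if` comes through as a NAME token, while the layout shows up as NEWLINE, INDENT, and DEDENT tokens.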
from token import DEDENT, INDENT, NEWLINE
import tokenize

result = ''
names = {
    'if': 'hehe',
    'elif': 'haha',
    'else': 'hihi',
    # Add all the other names you want to change here. Don't worry about a
    # name occurring inside a string literal: it will not be changed,
    # because it is not tokenized as a NAME token.
}
# open the script you want to encode as a tokenize file object
with tokenize.open('z.py') as f:
    # create a token generator and feed it the lines
    tokens = tokenize.generate_tokens(f.readline)
    # for every token in the file:
    for token in tokens:
        if names.get(token.string):  # token.string is the text of the token, e.g. 'if', 'for', '\n', 'or'
            result += names[token.string] + ' '
        elif token.type in (NEWLINE, INDENT, DEDENT):
            # copy whitespace tokens verbatim to preserve the layout
            result += token.string
        else:
            result += token.string + ' '
print(result)
with open('z.py', 'w') as f:
    f.write(result)
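To illustrate what ends up in z.py, here is the same substitution run on an in-memory string instead of a file (my own sketch; I also skip the final ENDMARKER token so no stray trailing space is written):

```python
import io
import tokenize
from token import DEDENT, ENDMARKER, INDENT, NEWLINE

names = {'if': 'hehe', 'elif': 'haha', 'else': 'hihi'}
source = "if a:\n    b = 1\n"
result = ''
for token in tokenize.generate_tokens(io.StringIO(source).readline):
    if token.type == ENDMARKER:
        break  # nothing useful left to copy
    if names.get(token.string):
        result += names[token.string] + ' '
    elif token.type in (NEWLINE, INDENT, DEDENT):
        result += token.string  # keep line breaks and indentation as they were
    else:
        result += token.string + ' '
print(result)
```

The keywords are replaced, but the indentation and line structure survive, so the result still tokenizes cleanly.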
Update
The code above only encodes; with a small change, the same code can be reused to both encode and decode the script:
from token import DEDENT, INDENT, NEWLINE
import tokenize

encode_name = {
    'if': 'hehe',
    'elif': 'haha',
    'else': 'hihi',
}

def code(name, encode=True):
    if encode:
        names = name
    else:
        # flip the dict: keys become values and vice versa
        names = {v: k for k, v in name.items()}
    result = ''
    with tokenize.open('z.py') as f:
        tokens = tokenize.generate_tokens(f.readline)
        for token in tokens:
            if names.get(token.string):
                result += names[token.string] + ' '
            elif token.type in (NEWLINE, INDENT, DEDENT):
                result += token.string
            else:
                result += token.string + ' '
    with open('z.py', 'w') as f:
        f.write(result)
code(encode_name, encode=False)
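Here is a round-trip sketch of the same idea on a string, so you can see the decode restore the original keywords (a hypothetical helper `code_str` of my own, with the file I/O replaced by io.StringIO and the ENDMARKER token skipped):

```python
import io
import tokenize
from token import DEDENT, ENDMARKER, INDENT, NEWLINE

def code_str(source, name, encode=True):
    # invert the mapping when decoding
    names = name if encode else {v: k for k, v in name.items()}
    result = ''
    for token in tokenize.generate_tokens(io.StringIO(source).readline):
        if token.type == ENDMARKER:
            break  # avoid writing a trailing space for the end-of-file token
        if names.get(token.string):
            result += names[token.string] + ' '
        elif token.type in (NEWLINE, INDENT, DEDENT):
            result += token.string
        else:
            result += token.string + ' '
    return result

encode_name = {'if': 'hehe', 'elif': 'haha', 'else': 'hihi'}
encoded = code_str("if a:\n    b = 1\n", encode_name)
decoded = code_str(encoded, encode_name, encode=False)
print(encoded)
print(decoded)
```

Decoding works because the encoded text is still valid enough for the tokenizer: the fake names come back as NAME tokens, which the inverted dict maps straight back to the keywords.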
Check the official docs for more information. I'm no expert myself, but don't hesitate to ask any questions here.
I'd be happy to help.
Good luck and happy coding!