Then I ran a scan with my friend broken5's WebAliveScan:
https://github.com/broken5/WebAliveScan
I wrote a small script to generate my own dictionary rules:
infolist = []
try:
    # Read the keyword list (one candidate path per line)
    with open('C:/Users/challenger/Desktop/工具/whatweb_set.txt', 'r', encoding='utf-8') as informationFile:
        for line in informationFile:
            line = line.strip('\n\r')
            if "bak" in line:
                # Backup archives come back as raw binary downloads
                infolist.append("{'path': '" + line + "', 'status': 200, 'type': 'application/octet-stream'},")
            else:
                infolist.append("{'path': '" + line + "', 'status': 200, 'type': 'html'},")
    print(infolist)
except Exception as e:
    print(e)  # e is an exception object; don't concatenate it with a string
    print("Read person_information error!")

# Write the generated rules out as the new dictionary
with open('C:/Users/challenger/Desktop/工具/test.txt', 'w') as dictionaryFile:
    for entry in infolist:
        dictionaryFile.write(entry + '\n')
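To illustrate the output: if whatweb_set.txt held the two (hypothetical) entries index.php and web.bak, test.txt would come out as:

{'path': 'index.php', 'status': 200, 'type': 'html'},
{'path': 'web.bak', 'status': 200, 'type': 'application/octet-stream'},

These lines are ready to paste into the scanner's rule configuration for the backup sweep.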
Then it was the same routine again: search fofa for other deployments of the same system. With over a hundred sites, the odds of a leftover backup were good, so I ran another WebAliveScan backup sweep and it turned up a bin.rar.
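Collecting those hundred-plus hosts by hand is tedious. Below is a minimal sketch of pulling them through the FOFA search API; the query string, credentials, and output filename are all placeholders, so substitute your own fingerprint and key:

import base64
import requests

# Hypothetical fingerprint of the target system; replace with the real query
query = 'app="SomeCMS"'

resp = requests.get(
    'https://fofa.info/api/v1/search/all',
    params={
        'email': 'you@example.com',  # your FOFA account email
        'key': 'YOUR_API_KEY',       # your FOFA API key
        'qbase64': base64.b64encode(query.encode()).decode(),
        'fields': 'host',            # a single field, so each result row is a plain string
        'size': 200,
    },
    timeout=30,
)

# Dump the hosts one per line as a target list for the next scan
with open('targets.txt', 'w') as f:
    for host in resp.json().get('results', []):
        f.write(host + '\n')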
Code Audit