Abstract
Data leakage prevention (DLP), also known as data loss prevention (DLP) and sometimes information leakage prevention (ILP), is a strategy that uses technical means to prevent an enterprise's designated data or information assets from leaving the enterprise in any form that violates its security policy. The DLP concept originated overseas and is currently one of the most widely adopted information security and data protection approaches internationally.
Chinese name: DLP数据泄露防护系统 (DLP data leakage prevention system)
English name: Data leakage prevention
Also known as: Data loss prevention
Purpose: Prevents an enterprise's designated data from flowing out of the enterprise
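To make the idea in the abstract concrete, the sketch below shows, in Python, a toy outbound-content check: a policy names a few categories of sensitive data, and a gateway-style function refuses any transfer whose content matches one of them. The rule names, patterns, and blocking behavior are illustrative assumptions only, not the design of any particular DLP product.

import re

# Toy policy table: each rule names a category of sensitive content and the
# pattern used to detect it. Rule names and patterns are illustrative
# assumptions, not taken from any real DLP product.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
    "cn_id_number": re.compile(r"\b\d{17}[\dXx]\b"),
    "confidential_marker": re.compile(r"机密|CONFIDENTIAL", re.IGNORECASE),
}

def matched_rules(outbound_text):
    """Return the names of every policy rule the outbound content matches."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(outbound_text)]

def allow_transfer(outbound_text):
    """Permit the transfer only when no sensitive pattern is present."""
    hits = matched_rules(outbound_text)
    if hits:
        print("Blocked: content matched policy rules", hits)
        return False
    return True

if __name__ == "__main__":
    allow_transfer("Quarterly roadmap attached.")                  # allowed
    allow_transfer("Customer card 4111 1111 1111 1111 attached.")  # blocked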
Contents
1 Background
2 Leakage channels
3 Protection principles
4 Protection outlook
5 Protection solutions
6 Configuration

Leakage channels
At present, data leakage channels fall into three categories: leakage while data is in use, leakage while it is in storage, and leakage while it is in transit. Enterprises generally install firewalls, antivirus software, and similar tools to block intrusions from outside, yet 97% of information leakage incidents in fact originate inside the enterprise. Judged against the three channels above, the root causes of information leakage are the following (a minimal fingerprint-based detection sketch follows this list):
1. Leakage in use: 1) operational mistakes leak or damage technical data; 2) data is leaked through printing, cutting, copying, pasting, saving as, renaming, and similar operations.
2. Leakage in storage: 1) data in data centers, on servers, and in databases is downloaded or shared at will; 2) departing employees walk off with confidential material on USB drives, CDs/DVDs, or portable hard disks; 3) laptops are stolen, lost, or sent out for repair, exposing the data on them.
3. Leakage in transit: 1) confidential material is casually transmitted over email, QQ, MSN, and similar channels; 2) data in transit is tampered with or forged through network sniffing and interception.
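One technique a DLP control can apply across all three channels above is document fingerprinting: compute a digest for every file the policy marks as confidential, then compare anything leaving through a monitored channel (an email attachment, a copy to a USB drive, an upload) against that registry. The Python sketch below is a minimal, hypothetical version of the idea; the directory and file paths and the block/allow decision are assumptions for illustration.

import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file's contents, used here as its fingerprint."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for block in iter(lambda: fh.read(1 << 20), b""):
            digest.update(block)
    return digest.hexdigest()

def build_registry(confidential_dir: Path) -> set:
    """Fingerprint every file under the directory holding protected documents."""
    return {fingerprint(p) for p in confidential_dir.rglob("*") if p.is_file()}

def outbound_allowed(candidate: Path, registry: set) -> bool:
    """Block the transfer when the file matches a registered fingerprint."""
    if fingerprint(candidate) in registry:
        print(f"Blocked: {candidate} matches a registered confidential document")
        return False
    return True

if __name__ == "__main__":
    registry = build_registry(Path("./confidential"))  # hypothetical protected folder
    sample = Path("./outbox/report.pdf")               # hypothetical outbound file
    if sample.exists():
        outbound_allowed(sample, registry)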
Topic
Projects
jvhoven/webpack-chunk-assets
themgoncalves/react-loadable-ssr-addon
zhgabor/mp3-concat
7rin0/comprocess
parallel-ml/Capella-FPL19-SplitNetworksOnFPGA
misl/openapi-validator-maven-plugin
sonjoydabnath/nodejs-graphql-app
Zanderwohl/Chunks
TatianaFlores/chunks
vardecab/chunks
reactgular/chunks
halilozercan/pget
UstymUkhman/vite-plugin-glsl
TomDeneire/chunkhunter
taylor-vann/parsley
tsangwpx/chunksum
alxlu/suppress-chunks-webpack-plugin
ropenscilabs/testrmd
NatGr/annotate_audio
teamable-software/css-chunks-html-webpack-plugin
Mehni/ShipChunks
eihwaz/anvil-region
aimenux/BatchLinqDemo
kevinstuffandthings/termichunk
rookiedk/Chunka-Codes
TomDeneire/chunkhunterweb
SergioLaRosa/splitnjoin
out-of-GPU-memoery
SNN01/splitnjoiny
theboxahaan/stre-ami-ing
All projects