[Image Quality Assessment] Toward Content Independent No-reference Image Quality Assessment Using Deep Learning --2019
This study is based on the AlexNet architecture. In the direct quotations, learning-based neural networks (e.g., CNNs) are identified as a major research direction, and it is noted that deep neural networks require large numbers of labeled samples and an involved feature-extraction process [17]. In the author's view, transfer learning (pre-training on ImageNet and then fine-tuning on the LIVE dataset) effectively improves model performance [18]; the heavy demand for labeled samples and the cropping of images into patches are also discussed [19].
My own view proposes an AlexNet-based modification: adjust the input image size to fit the fully connected layers, and introduce L2 regularization, Dropout, and the ReLU activation to strengthen generalization [20]. Concretely, my experiment proceeds in two steps: first extract features with AlexNet, then fine-tune an attached task-specific module to learn the predicted quality score [3] (as shown in the figure).
一、Direct quotations
Particularly, learning-based NR methods have become focal areas of recent research, with CNN-based end-to-end approaches having garnered significant attention within the domain of NR techniques.
The AlexNet architecture, a renowned deep convolutional neural network (DCNN), is composed of 23 layers. Among these, five convolutional (CONV) layers are dedicated to feature extraction, while three fully connected (FC) layers systematically project the extracted features onto categorical outputs.
二、The author's views
The author adopted a transfer learning-based model, where the network initially underwent pre-training on the ImageNet dataset [17], followed by further optimization using the LIVE dataset [18].
In particular, it is widely recognized that training deep convolutional neural networks (CNNs) necessitates a substantial number of labeled samples, which presents challenges in practical applications.
P3: [Images are cropped into patches of 227 × 227 pixels]
P4: [We set a stride of 56 when extracting patches from the images]
P3: [using a regulated L2 loss]
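The patch cropping described in the quotes (227 × 227 patches extracted with a stride of 56) can be sketched as follows. The function name `extract_patches` and the 512 × 512 example image are illustrative choices, not from the paper:

```python
import numpy as np

def extract_patches(image, patch_size=227, stride=56):
    """Crop an H x W x C image into patch_size x patch_size patches
    using a sliding window with the given stride (values quoted
    from the paper: 227-pixel patches, stride 56)."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Example: a 512 x 512 RGB image yields a 6 x 6 grid of overlapping patches.
img = np.zeros((512, 512, 3), dtype=np.uint8)
patches = extract_patches(img)
print(patches.shape)  # (36, 227, 227, 3)
```

Because the stride (56) is much smaller than the patch size (227), neighboring patches overlap heavily, which multiplies the number of training samples per image.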
三、My views
(AlexNet's strength in extracting image features can be borrowed; note, however, that if the input image size changes, the fully connected layers must be restructured accordingly.)
During training, L2 regularization is applied to prevent overfitting, combined with Dropout and the ReLU activation to further improve model performance.
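A minimal NumPy sketch of the three pieces mentioned above (ReLU, Dropout, and an L2 penalty added to the loss); the layer sizes, dropout rate, and regularization strength are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_relu_dropout(x, W, b, drop_p=0.5, training=True):
    """One fully connected layer: ReLU activation followed by
    inverted dropout (kept units are scaled by 1 / (1 - drop_p)
    so the expected activation is unchanged at test time)."""
    h = np.maximum(0.0, x @ W + b)          # ReLU
    if training and drop_p > 0.0:
        mask = rng.random(h.shape) >= drop_p
        h = h * mask / (1.0 - drop_p)       # inverted dropout
    return h

def l2_penalty(weights, lam=1e-4):
    """L2 regularization term to be added to the task loss."""
    return lam * sum(np.sum(W ** 2) for W in weights)

x = rng.standard_normal((4, 4096))          # e.g. AlexNet FC features
W = rng.standard_normal((4096, 512)) * 0.01
b = np.zeros(512)
h = dense_relu_dropout(x, W, b)
loss_reg = l2_penalty([W])
```

In a real framework these would be the layer's activation, a Dropout layer, and the optimizer's weight decay, respectively; the sketch only makes the arithmetic explicit.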



To make full use of the knowledge stored in AlexNet, we remove the final FC-1000 layer, together with the softmax and classification layers, from our configuration, retaining the pretrained layers before them.
The system uses the AlexNet architecture with fine-tuning to extract image features accurately; a task-specific module is then attached on top and trained to learn the predicted quality score. The structure of the task-specific module is as follows:

