Commit 519ee40

rename promise_nfr dataset
1 parent 4a0b5e8 commit 519ee40

4 files changed: +9 additions, -9 deletions


Code/Task1_to_3_original_Promise_NFR_dataset/Task1_F_NFR_classification.ipynb

Lines changed: 3 additions & 3 deletions
@@ -249,7 +249,7 @@
 "config_data = Config(\n",
 " root_folder = '.', # where is the root folder? Keep it that way if you want to load from Google Drive\n",
 " data_folder = '/', # where is the folder containing the datasets; relative to root\n",
-" train_data = ['Raw-DataTrack-Huang_all.csv'], # dataset file to use\n",
+" train_data = ['promise_nfr.csv'], # dataset file to use\n",
 " label_column = clazz,\n",
 " log_folder_name = '/log/',\n",
 " log_file = clazz + '_' + Fold(config.fold).name + '_' + Sampling(config.sampling).name + '_classifierPredictions_' + datetime.now().strftime('%Y%m%d-%H%M') + '.txt', # log-file name (make sure log folder exists)\n",
@@ -260,7 +260,7 @@
 "\n",
 " orig_data_set_zip = 'https://zenodo.org/record/3833661/files/NoRBERT_RE20_Paper65.zip', # link to the data set (on zenodo). DO NOT CHANGE!\n",
 " orig_data_zip_name = 'NoRBERT_RE20_Paper65.zip', # DO NOT CHANGE\n",
-" orig_data_file_in_zip = r'/Code/Task1_to_3_original_Promise_NFR_dataset/Raw-DataTrack-Huang_all.csv', # DO NOT CHANGE\n",
+" orig_data_file_in_zip = r'/Code/Task1_to_3_original_Promise_NFR_dataset/promise_nfr.csv', # DO NOT CHANGE\n",
 " \n",
 " # Project split to use, either p-fold (as in Dalpiaz) or loPo\n",
 " #project_fold = [[3, 9, 11], [1, 5, 12], [6, 10, 13], [1, 8, 14], [3, 12, 15], [2, 5, 11], [6, 9, 14], [7, 8, 13], [2, 4, 15], [4, 7, 10] ], # p-fold\n",
@@ -280,7 +280,7 @@
 "id": "SVU_viFX-ezy"
 },
 "source": [
-"To import the dataset, first we have to either load the data set from zenodo (and unzip the needed file) or connect to our Google drive (if data should be loaded from gdrive). To connect to our Google drive, we have to authenticating the access and mount the drive."
+"To import the dataset, first we have to either load the data set from zenodo (and unzip the needed file) or connect to our Google drive (if data should be loaded from gdrive). To connect to our Google drive, we have to authenticate the access and mount the drive."
 ]
 },
 {

Code/Task1_to_3_original_Promise_NFR_dataset/Task2_3_Multiclass_classification_of_NFR_subclasses.ipynb

Lines changed: 3 additions & 3 deletions
@@ -243,7 +243,7 @@
 "config_data = Config(\n",
 " root_folder = '.', # where is the root folder? Keep it that way if you want to load from Google Drive\n",
 " data_folder = '/', # where is the folder containing the datasets; relative to root\n",
-" train_data = ['Raw-DataTrack-Huang_all.csv'], # dataset file to use\n",
+" train_data = ['promise_nfr.csv'], # dataset file to use\n",
 " label_column = clazz,\n",
 " log_folder_name = '/log/',\n",
 " log_file = clazz + '_' + Fold(config.fold).name + '_classifierPredictions_' + datetime.now().strftime('%Y%m%d-%H%M') + '.txt', # log-file name (make sure log folder exists)\n",
@@ -254,7 +254,7 @@
 " \n",
 " orig_data_set_zip = 'https://zenodo.org/record/3833661/files/NoRBERT_RE20_Paper65.zip', # link to the data set (on zenodo). DO NOT CHANGE!\n",
 " orig_data_zip_name = 'NoRBERT_RE20_Paper65.zip', # DO NOT CHANGE\n",
-" orig_data_file_in_zip = r'/Code/Task1_to_3_original_Promise_NFR_dataset/Raw-DataTrack-Huang_all.csv', # DO NOT CHANGE\n",
+" orig_data_file_in_zip = r'/Code/Task1_to_3_original_Promise_NFR_dataset/promise_nfr.csv', # DO NOT CHANGE\n",
 " \n",
 " # Project split to use, either p-fold (as in Dalpiaz) or loPo\n",
 " #project_fold = [[3, 9, 11], [1, 5, 12], [6, 10, 13], [1, 8, 14], [3, 12, 15], [2, 5, 11], [6, 9, 14], [7, 8, 13], [2, 4, 15], [4, 7, 10] ], # p-fold\n",
@@ -276,7 +276,7 @@
 "id": "SVU_viFX-ezy"
 },
 "source": [
-"To import the dataset, first we have to either load the data set from zenodo (and unzip the needed file) or connect to our Google drive (if data should be loaded from gdrive). To connect to our Google drive, we have to authenticating the access and mount the drive."
+"To import the dataset, first we have to either load the data set from zenodo (and unzip the needed file) or connect to our Google drive (if data should be loaded from gdrive). To connect to our Google drive, we have to authenticate the access and mount the drive."
 ]
 },
 {

Code/Task1_to_3_original_Promise_NFR_dataset/Task2_Most_Frequent_NFR_classes_binary_classification.ipynb

Lines changed: 3 additions & 3 deletions
@@ -251,7 +251,7 @@
 "config_data = Config(\n",
 " root_folder = '.', # where is the root folder? Keep it that way if you want to load from Google Drive\n",
 " data_folder = '/', # where is the folder containing the datasets; relative to root\n",
-" train_data = ['Raw-DataTrack-Huang_all.csv'], # dataset file to use\n",
+" train_data = ['promise_nfr.csv'], # dataset file to use\n",
 " label_column = clazz,\n",
 " log_folder_name = '/log/',\n",
 " log_file = clazz + '_' + Fold(config.fold).name + '_' + Sampling(config.sampling).name + '_classifierPredictions_' + datetime.now().strftime('%Y%m%d-%H%M') + '.txt', # log-file name (make sure log folder exists)\n",
@@ -262,7 +262,7 @@
 " \n",
 " orig_data_set_zip = 'https://zenodo.org/record/3833661/files/NoRBERT_RE20_Paper65.zip', # link to the data set (on zenodo). DO NOT CHANGE!\n",
 " orig_data_zip_name = 'NoRBERT_RE20_Paper65.zip', # DO NOT CHANGE\n",
-" orig_data_file_in_zip = r'/Code/Task1_to_3_original_Promise_NFR_dataset/Raw-DataTrack-Huang_all.csv', # DO NOT CHANGE\n",
+" orig_data_file_in_zip = r'/Code/Task1_to_3_original_Promise_NFR_dataset/promise_nfr.csv', # DO NOT CHANGE\n",
 " \n",
 " # Project split to use, either p-fold (as in Dalpiaz) or loPo\n",
 " #project_fold = [[3, 9, 11], [1, 5, 12], [6, 10, 13], [1, 8, 14], [3, 12, 15], [2, 5, 11], [6, 9, 14], [7, 8, 13], [2, 4, 15], [4, 7, 10] ], # p-fold\n",
@@ -286,7 +286,7 @@
 "id": "SVU_viFX-ezy"
 },
 "source": [
-"To import the dataset, first we have to either load the data set from zenodo (and unzip the needed file) or connect to our Google drive (if data should be loaded from gdrive). To connect to our Google drive, we have to authenticating the access and mount the drive."
+"To import the dataset, first we have to either load the data set from zenodo (and unzip the needed file) or connect to our Google drive (if data should be loaded from gdrive). To connect to our Google drive, we have to authenticate the access and mount the drive."
 ]
 },
 {

Code/Task1_to_3_original_Promise_NFR_dataset/Raw-DataTrack-Huang_all.csv renamed to Code/Task1_to_3_original_Promise_NFR_dataset/promise_nfr.csv

File renamed without changes.
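The Config cells changed above keep the zenodo zip URL and update the zip-internal path (`orig_data_file_in_zip`) to the renamed `promise_nfr.csv`. As a hedged sketch of what the notebooks' zenodo path must handle (the helper `extract_member` below is hypothetical, not the notebooks' actual loader): the configured member path carries a leading slash, while `zipfile` stores archive names without one, so the slash has to be stripped before the lookup. The demo builds a tiny in-memory zip instead of downloading the ~real archive.

```python
import io
import zipfile

# Values taken from the Config cells in the diffs above.
ZIP_URL = "https://zenodo.org/record/3833661/files/NoRBERT_RE20_Paper65.zip"
MEMBER = "/Code/Task1_to_3_original_Promise_NFR_dataset/promise_nfr.csv"


def extract_member(zip_bytes: bytes, member: str) -> bytes:
    """Return the raw bytes of one archive member.

    zipfile archives name members without a leading slash, so strip
    it from the configured path before the lookup.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return zf.read(member.lstrip("/"))


# Self-contained demo: build a tiny zip in memory rather than
# downloading ZIP_URL; the CSV content here is made up.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(MEMBER.lstrip("/"), "id,text,class\n1,some requirement,F\n")

csv_bytes = extract_member(buf.getvalue(), MEMBER)
print(csv_bytes.decode())
```

In the real notebooks the zip bytes would come from fetching `ZIP_URL`; the leading-slash handling is the part this sketch illustrates.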
