Added support for creating Directory nodes in EPAS. #8034
parent
4791897578
commit
7bae1eb663
@@ -0,0 +1,87 @@
.. _directory_dialog:

*************************
`Directory Dialog`:index:
*************************

Use the *Directory* dialog to create an alias for a file system directory path.
To create directories, you must have the CREATE ANY DIRECTORY system privilege.
When you create a directory, you are automatically granted READ and WRITE
privileges on the directory, and you can grant READ and WRITE privileges to
other users and roles. The superuser can also grant these privileges to other
users and roles.

Note that directories are supported only when connected to EDB Postgres
Advanced Server. For more information about using directories, see the EDB
Postgres Advanced Server Guide, available at:

https://www.enterprisedb.com/docs/epas/latest/epas_compat_sql/

The *Directory* dialog organizes the definition of a directory through the
following tabs: *General*, *Definition*, *Security*, and *SQL*. The *SQL* tab
displays the SQL code generated by dialog selections.

.. image:: images/directory_general.png
    :alt: Directory general tab
    :align: center

Use the fields on the *General* tab to specify directory attributes:

* Use the *Name* field to add a directory alias name. This name is displayed
  in the object explorer.
* Select the owner of the directory from the drop-down listbox in the *Owner*
  field.

Click the *Definition* tab to continue.

.. image:: images/directory_definition.png
    :alt: Directory dialog definition tab
    :align: center

* Use the *Location* field to specify the fully qualified directory path
  represented by the alias name. The CREATE DIRECTORY command doesn't create
  the operating system directory; the physical directory must be created
  separately using operating system commands.

Click the *Security* tab to continue.

.. image:: images/directory_security.png
    :alt: Directory dialog security tab
    :align: center

Note: The *Security* tab is only available when connected to EPAS 17.

Use the *Security* tab to assign privileges for the directory. Use the
*Privileges* panel to assign security privileges. Click the *Add* icon (+)
to assign a set of privileges:

* Select the name of the role from the drop-down listbox in the *Grantee*
  field.
* The current user, who is the default grantor for granting the privilege,
  is displayed in the *Grantor* field.
* Click inside the *Privileges* field. Check the boxes to the left of one or
  more privileges to grant the selected privileges to the specified user.

Click the *Add* icon to assign additional sets of privileges; to discard a
privilege, click the trash icon to the left of the row and confirm deletion
in the *Delete Row* popup.

Click the *SQL* tab to continue.

Your entries in the *Directory* dialog generate a SQL command (see an example
below). Use the *SQL* tab for review; revisit or switch tabs to make any
changes to the SQL command.

Example
*******

The following is an example of the SQL command generated by user selections
in the *Directory* dialog:

.. image:: images/directory_sql.png
    :alt: Directory dialog sql tab
    :align: center

The example shown demonstrates creating a directory named *test1* with a
*Location* value of */home/test_dir*.

* Click the *Info* button (i) to access online help.
* Click the *Save* button to save work.
* Click the *Close* button to exit without saving work.
* Click the *Reset* button to restore configuration parameters.
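As a sketch, the SQL generated for the example above would look roughly like the following. The exact statements depend on the dialog selections; *alice* is a hypothetical grantee, and the GRANT is only generated where directory privileges are supported (the EPAS 17 *Security* tab):

```sql
-- Create the directory alias shown in the example
CREATE DIRECTORY test1 AS '/home/test_dir';

-- Grant READ on the directory to another role (EPAS 17 Security tab)
GRANT READ ON DIRECTORY test1 TO alice;
```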
Binary file not shown. (After: 37 KiB)
Binary file not shown. (After: 42 KiB)
Binary file not shown. (After: 62 KiB)
Binary file not shown. (After: 79 KiB)
@@ -20,4 +20,5 @@ database, right-click on the *Databases* node, and select *Create Database...*
     tablespace_dialog
     replica_nodes_dialog
     pgd_replication_group_dialog
     role_reassign_dialog
+    directory_dialog
@@ -13,7 +13,7 @@ connected to EDB Postgres Advanced Server; for more information about using
 resource groups, please see the EDB Postgres Advanced Server Guide, available
 at:
 
-http://www.enterprisedb.com/
+https://www.enterprisedb.com/docs/epas/latest/epas_compat_sql/
 
 Fields used to create a resource group are located on the *General* tab. The
 *SQL* tab displays the SQL code generated by your selections on the *Resource
@@ -331,6 +331,9 @@ class ServerModule(sg.ServerGroupPluginModule):
         from .tablespaces import blueprint as module
         self.submodules.append(module)
 
+        from .directories import blueprint as module
+        self.submodules.append(module)
+
         from .replica_nodes import blueprint as module
         self.submodules.append(module)
@@ -5,7 +5,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Identity columns are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Identity columns are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "description": "Create Table API Test",
@@ -111,7 +111,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 110000,
-      "skip_msg": "Hash Partition are not supported by PPAS/PG 11.0 and below."
+      "skip_msg": "Hash Partition are not supported by EPAS/PG 11.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -150,7 +150,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -204,7 +204,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -241,7 +241,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -297,7 +297,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 110000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -344,7 +344,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -383,7 +383,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "is_partitioned": true,
@@ -434,7 +434,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Identity columns are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Identity columns are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "table_name": "abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopqrstuvwxyz123",
@@ -493,7 +493,7 @@
     "is_positive_test": false,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Identity columns are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Identity columns are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "table_name": "",
@@ -553,7 +553,7 @@
     "is_positive_test": false,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Identity columns are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Identity columns are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {
       "description": "Create Table API Test",
@@ -1164,7 +1164,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "range",
       "mode": "create"
     },
@@ -1184,7 +1184,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "range",
       "mode": "multilevel"
     },
@@ -1204,7 +1204,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "list",
       "mode": "create"
     },
@@ -1224,7 +1224,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "list",
       "mode": "multilevel"
     },
@@ -1244,7 +1244,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "range",
       "mode": "detach"
     },
@@ -1264,7 +1264,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "list",
       "mode": "detach"
     },
@@ -1284,7 +1284,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "range",
       "mode": "attach"
     },
@@ -1304,7 +1304,7 @@
     "inventory_data": {
       "is_partition": true,
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below.",
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below.",
       "partition_type": "list",
       "mode": "attach"
     },
@@ -2145,7 +2145,7 @@
     "is_positive_test": true,
     "inventory_data": {
       "server_min_version": 100000,
-      "skip_msg": "Partitioned table are not supported by PPAS/PG 10.0 and below."
+      "skip_msg": "Partitioned table are not supported by EPAS/PG 10.0 and below."
     },
     "test_data": {},
     "mocking_required": false,
@@ -0,0 +1,587 @@
# -*- coding: utf-8 -*-

##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

"""Implements Directories for EPAS 13 and above"""

import json
import re
from functools import wraps

from pgadmin.browser.server_groups import servers
from flask import render_template, request, jsonify, current_app
from flask_babel import gettext
from pgadmin.browser.collection import CollectionNodeModule
from pgadmin.browser.server_groups.servers.utils import parse_priv_from_db, \
    parse_priv_to_db
from pgadmin.browser.utils import PGChildNodeView
from pgadmin.utils.ajax import make_json_response, \
    make_response as ajax_response, internal_server_error, gone
from pgadmin.utils.ajax import precondition_required
from pgadmin.utils.driver import get_driver
from config import PG_DEFAULT_DRIVER


class DirectoryModule(CollectionNodeModule):
    """
    Module for managing directories.
    """
    _NODE_TYPE = 'directory'
    _COLLECTION_LABEL = gettext("Directories")

    def __init__(self, import_name, **kwargs):
        super().__init__(import_name, **kwargs)

        self.min_ver = 130000
        self.max_ver = None
        self.server_type = ['ppas']

    def get_nodes(self, gid, sid):
        """
        Generate the collection node.
        """
        yield self.generate_browser_collection_node(sid)

    @property
    def script_load(self):
        """
        Load the module script for the server when any of the server-group
        nodes is initialized.
        """
        return servers.ServerModule.node_type

    @property
    def module_use_template_javascript(self):
        """
        Returns whether a Jinja2 template is used for generating the
        javascript module.
        """
        return False

    @property
    def node_inode(self):
        return False


# Register the module as a Blueprint
blueprint = DirectoryModule(__name__)


class DirectoryView(PGChildNodeView):
    node_type = blueprint.node_type

    parent_ids = [
        {'type': 'int', 'id': 'gid'},
        {'type': 'int', 'id': 'sid'}
    ]
    ids = [
        {'type': 'int', 'id': 'dr_id'}
    ]

    operations = dict({
        'obj': [
            {'get': 'properties', 'delete': 'delete', 'put': 'update'},
            {'get': 'list', 'post': 'create', 'delete': 'delete'}
        ],
        'nodes': [{'get': 'node'}, {'get': 'nodes'}],
        'children': [{'get': 'children'}],
        'sql': [{'get': 'sql'}],
        'msql': [{'get': 'msql'}, {'get': 'msql'}],
    })

    def check_precondition(f):
        """
        This decorator checks the database connection before running the
        view, and attaches the manager, conn, and template_path properties
        to self.
        """

        @wraps(f)
        def wrap(*args, **kwargs):
            # Here args[0] will hold self & kwargs will hold gid, sid, dr_id
            self = args[0]
            self.manager = get_driver(
                PG_DEFAULT_DRIVER
            ).connection_manager(
                kwargs['sid']
            )
            self.conn = self.manager.connection()
            self.datistemplate = False
            if (
                self.manager.db_info is not None and
                self.manager.did in self.manager.db_info and
                'datistemplate' in self.manager.db_info[self.manager.did]
            ):
                self.datistemplate = self.manager.db_info[
                    self.manager.did]['datistemplate']

            # If the DB is not connected, return an error to the browser
            if not self.conn.connected():
                current_app.logger.warning(
                    "Connection to the server has been lost."
                )
                return precondition_required(
                    gettext(
                        "Connection to the server has been lost."
                    )
                )

            self.template_path = 'directories/sql/#{0}#'.format(
                self.manager.version
            )
            current_app.logger.debug(
                "Using the template path: %s", self.template_path
            )
            # Allowed ACL on directory
            self.acl = ['W', 'R']

            return f(*args, **kwargs)

        return wrap

    @check_precondition
    def list(self, gid, sid):
        SQL = render_template(
            "/".join([self.template_path, self._PROPERTIES_SQL]),
            conn=self.conn
        )
        status, res = self.conn.execute_dict(SQL)

        if not status:
            return internal_server_error(errormsg=res)
        return ajax_response(
            response=res['rows'],
            status=200
        )

    @check_precondition
    def node(self, gid, sid, dr_id):
        SQL = render_template(
            "/".join([self.template_path, self._NODES_SQL]),
            dr_id=dr_id, conn=self.conn
        )
        status, rset = self.conn.execute_2darray(SQL)
        if not status:
            return internal_server_error(errormsg=rset)

        if len(rset['rows']) == 0:
            return gone(gettext("Could not find the directory."))

        res = self.blueprint.generate_browser_node(
            rset['rows'][0]['oid'],
            sid,
            rset['rows'][0]['name'],
            icon="icon-directory"
        )

        return make_json_response(
            data=res,
            status=200
        )

    @check_precondition
    def nodes(self, gid, sid, dr_id=None):
        res = []
        SQL = render_template(
            "/".join([self.template_path, self._NODES_SQL]),
            dr_id=dr_id, conn=self.conn
        )
        status, rset = self.conn.execute_2darray(SQL)
        if not status:
            return internal_server_error(errormsg=rset)

        for row in rset['rows']:
            res.append(
                self.blueprint.generate_browser_node(
                    row['oid'],
                    sid,
                    row['name'],
                    icon="icon-directory",
                ))

        return make_json_response(
            data=res,
            status=200
        )

    def _formatter(self, data, dr_id=None):
        """
        Return the formatted output of collections.
        """
        # We need to parse & convert the ACL coming from the database to
        # json format
        SQL = render_template(
            "/".join([self.template_path, self._ACL_SQL]),
            dr_id=dr_id, conn=self.conn
        )
        status, acl = self.conn.execute_dict(SQL)
        if not status:
            return internal_server_error(errormsg=acl)

        # We get the privileges from the ACL SQL, so we don't need them
        # from the properties SQL
        data['diracl'] = []

        for row in acl['rows']:
            priv = parse_priv_from_db(row)
            if row['deftype'] in data:
                data[row['deftype']].append(priv)
            else:
                data[row['deftype']] = [priv]

        return data

    @check_precondition
    def properties(self, gid, sid, dr_id):
        SQL = render_template(
            "/".join([self.template_path, self._PROPERTIES_SQL]),
            dr_id=dr_id, conn=self.conn
        )
        status, res = self.conn.execute_dict(SQL)
        if not status:
            return internal_server_error(errormsg=res)

        if len(res['rows']) == 0:
            return gone(
                gettext("Could not find the directory information.")
            )

        # Making a copy of the output for future use
        copy_data = dict(res['rows'][0])
        copy_data = self._formatter(copy_data, dr_id)

        return ajax_response(
            response=copy_data,
            status=200
        )

    @check_precondition
    def create(self, gid, sid):
        """
        Create the new directory object.
        """
        required_args = {
            'name': 'Name',
            'path': 'Location'
        }

        data = request.form if request.form else json.loads(
            request.data
        )

        for arg in required_args:
            if arg not in data:
                return make_json_response(
                    status=410,
                    success=0,
                    errormsg=gettext(
                        "Could not find the required parameter ({})."
                    ).format(arg)
                )

        # To format privileges coming from the client
        if 'diracl' in data:
            data['diracl'] = parse_priv_to_db(data['diracl'], self.acl)

        try:
            SQL = render_template(
                "/".join([self.template_path, self._CREATE_SQL]),
                data=data, conn=self.conn
            )

            status, res = self.conn.execute_scalar(SQL)

            if not status:
                return internal_server_error(errormsg=res)

            # To fetch the oid of the newly created directory
            SQL = render_template(
                "/".join([self.template_path, self._ALTER_SQL]),
                directory=data['name'], conn=self.conn
            )

            status, dr_id = self.conn.execute_scalar(SQL)

            if not status:
                return internal_server_error(errormsg=dr_id)

            SQL = render_template(
                "/".join([self.template_path, self._ALTER_SQL]),
                data=data, conn=self.conn
            )

            # Check that we are not executing an empty query
            if SQL and SQL.strip('\n') and SQL.strip(' '):
                status, res = self.conn.execute_scalar(SQL)
                if not status:
                    # The directory itself was created, but the ALTER
                    # statement failed; return the new node along with
                    # the error details.
                    return jsonify(
                        node=self.blueprint.generate_browser_node(
                            dr_id,
                            sid,
                            data['name'],
                            icon="icon-directory"
                        ),
                        success=0,
                        errormsg=res,
                        info=gettext(
                            'Directory created successfully.'
                        )
                    )

            return jsonify(
                node=self.blueprint.generate_browser_node(
                    dr_id,
                    sid,
                    data['name'],
                    icon="icon-directory",
                )
            )
        except Exception as e:
            current_app.logger.exception(e)
            return internal_server_error(errormsg=str(e))

    @check_precondition
    def update(self, gid, sid, dr_id):
        """
        Update the directory object.
        """
        data = request.form if request.form else json.loads(
            request.data
        )

        try:
            SQL, name = self.get_sql(gid, sid, data, dr_id)
            # Most probably this is due to an error
            if not isinstance(SQL, str):
                return SQL

            SQL = SQL.strip('\n').strip(' ')
            status, res = self.conn.execute_scalar(SQL)
            if not status:
                return internal_server_error(errormsg=res)

            return jsonify(
                node=self.blueprint.generate_browser_node(
                    dr_id,
                    sid,
                    name,
                    icon="icon-%s" % self.node_type,
                )
            )
        except Exception as e:
            current_app.logger.exception(e)
            return internal_server_error(errormsg=str(e))

    @check_precondition
    def delete(self, gid, sid, dr_id=None):
        """
        Drop the directory object.
        """
        if dr_id is None:
            data = request.form if request.form else json.loads(
                request.data
            )
        else:
            data = {'ids': [dr_id]}

        try:
            for dr_id in data['ids']:
                SQL = render_template(
                    "/".join([self.template_path, self._NODES_SQL]),
                    dr_id=dr_id, conn=self.conn
                )
                # Get the name of the directory from dr_id
                status, rset = self.conn.execute_dict(SQL)

                if not status:
                    return internal_server_error(errormsg=rset)

                if not rset['rows']:
                    return make_json_response(
                        success=0,
                        errormsg=gettext(
                            'Error: Object not found.'
                        ),
                        info=gettext(
                            'The specified directory could not be found.\n'
                        )
                    )

                # Drop the directory
                SQL = render_template(
                    "/".join([self.template_path, self._DELETE_SQL]),
                    dr_name=(rset['rows'][0])['name'], conn=self.conn
                )

                status, res = self.conn.execute_scalar(SQL)
                if not status:
                    return internal_server_error(errormsg=res)

            return make_json_response(
                success=1,
                info=gettext("Directory dropped")
            )

        except Exception as e:
            current_app.logger.exception(e)
            return internal_server_error(errormsg=str(e))

    @check_precondition
    def msql(self, gid, sid, dr_id=None):
        """
        Return the modified SQL.
        """
        data = dict()
        for k, v in request.args.items():
            try:
                data[k] = json.loads(v)
            except ValueError:
                data[k] = v

        sql, _ = self.get_sql(gid, sid, data, dr_id)
        # Most probably this is due to an error
        if not isinstance(sql, str):
            return sql

        sql = sql.strip('\n').strip(' ')
        if sql == '':
            sql = "--modified SQL"
        return make_json_response(
            data=sql,
            status=200
        )

    def _format_privilege_data(self, data):
        for key in ['diracl']:
            if key in data and data[key] is not None:
                if 'added' in data[key]:
                    data[key]['added'] = parse_priv_to_db(
                        data[key]['added'], self.acl
                    )
                if 'changed' in data[key]:
                    data[key]['changed'] = parse_priv_to_db(
                        data[key]['changed'], self.acl
                    )
                if 'deleted' in data[key]:
                    data[key]['deleted'] = parse_priv_to_db(
                        data[key]['deleted'], self.acl
                    )

    def get_sql(self, gid, sid, data, dr_id=None):
        """
        Generate SQL from the model/properties data.
        """
        required_args = [
            'name'
        ]

        if dr_id is not None:
            SQL = render_template(
                "/".join([self.template_path, self._PROPERTIES_SQL]),
                dr_id=dr_id, conn=self.conn
            )
            status, res = self.conn.execute_dict(SQL)
            if not status:
                return internal_server_error(errormsg=res)

            if len(res['rows']) == 0:
                return gone(
                    gettext("Could not find the directory on the server.")
                )

            # Making a copy of the output for further processing
            old_data = dict(res['rows'][0])
            old_data = self._formatter(old_data, dr_id)

            # To format the privileges data coming from the client
            self._format_privilege_data(data)

            # If the name is not present in the update data, copy it
            # from the old data
            for arg in required_args:
                if arg not in data:
                    data[arg] = old_data[arg]

            SQL = render_template(
                "/".join([self.template_path, self._UPDATE_SQL]),
                data=data, o_data=old_data, conn=self.conn
            )
        else:
            # To format privileges coming from the client
            if 'diracl' in data:
                data['diracl'] = parse_priv_to_db(data['diracl'], self.acl)
            # The request is for a new object, which does not have a dr_id
            SQL = render_template(
                "/".join([self.template_path, self._CREATE_SQL]),
                data=data, conn=self.conn
            )
            SQL += "\n"
            SQL += render_template(
                "/".join([self.template_path, self._ALTER_SQL]),
                data=data, conn=self.conn
|
||||||
|
)
|
||||||
|
SQL = re.sub('\n{2,}', '\n\n', SQL)
|
||||||
|
return SQL, data['name'] if 'name' in data else old_data['name']
|
||||||
|
|
||||||
|
@check_precondition
|
||||||
|
def sql(self, gid, sid, dr_id):
|
||||||
|
"""
|
||||||
|
This function will generate sql for sql panel
|
||||||
|
"""
|
||||||
|
SQL = render_template(
|
||||||
|
"/".join([self.template_path, self._PROPERTIES_SQL]),
|
||||||
|
dr_id=dr_id, conn=self.conn
|
||||||
|
)
|
||||||
|
status, res = self.conn.execute_dict(SQL)
|
||||||
|
if not status:
|
||||||
|
return internal_server_error(errormsg=res)
|
||||||
|
|
||||||
|
if len(res['rows']) == 0:
|
||||||
|
return gone(
|
||||||
|
gettext("Could not find the directory on the server.")
|
||||||
|
)
|
||||||
|
# Making copy of output for future use
|
||||||
|
old_data = dict(res['rows'][0])
|
||||||
|
|
||||||
|
old_data = self._formatter(old_data, dr_id)
|
||||||
|
|
||||||
|
# To format privileges
|
||||||
|
if 'diracl' in old_data:
|
||||||
|
old_data['diracl'] = parse_priv_to_db(old_data['diracl'], self.acl)
|
||||||
|
|
||||||
|
SQL = ''
|
||||||
|
# We are not showing create sql for system directory.
|
||||||
|
if not old_data['name'].startswith('pg_'):
|
||||||
|
SQL = render_template(
|
||||||
|
"/".join([self.template_path, self._CREATE_SQL]),
|
||||||
|
data=old_data, conn=self.conn
|
||||||
|
)
|
||||||
|
SQL += "\n"
|
||||||
|
SQL += render_template(
|
||||||
|
"/".join([self.template_path, self._ALTER_SQL]),
|
||||||
|
data=old_data, conn=self.conn
|
||||||
|
)
|
||||||
|
|
||||||
|
sql_header = """
|
||||||
|
-- Directory: {0}
|
||||||
|
|
||||||
|
-- DROP DIRECTORY IF EXISTS {0};
|
||||||
|
|
||||||
|
""".format(old_data['name'])
|
||||||
|
|
||||||
|
SQL = sql_header + SQL
|
||||||
|
SQL = re.sub('\n{2,}', '\n\n', SQL)
|
||||||
|
return ajax_response(response=SQL.strip('\n'))
|
||||||
|
|
||||||
|
|
||||||
|
# Register the view with the blueprint
|
||||||
|
DirectoryView.register_node_view(blueprint)
|
||||||
|
|
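The `msql` handler above decodes each query-string value as JSON and falls back to the raw string when decoding fails, so numbers and quoted strings arrive typed while plain strings such as filesystem paths pass through unchanged. In isolation the pattern looks like this (a minimal standalone sketch; the helper name is hypothetical):

```python
import json

def coerce_query_args(args):
    """Decode each value as JSON where possible; keep the raw string otherwise.

    Mirrors the json.loads/ValueError fallback in the msql handler: '42'
    becomes the integer 42, '"dir1"' becomes the string 'dir1', and a bare
    path like '/home/test_dir' is not valid JSON so it stays a string.
    """
    data = {}
    for k, v in args.items():
        try:
            data[k] = json.loads(v)
        except ValueError:
            data[k] = v
    return data
```

For example, `coerce_query_args({'oid': '42', 'path': '/home/test_dir'})` yields an integer `oid` and a string `path`.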
@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><defs><style>.cls-1{fill:#f5f47e;stroke:#caa524;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75px;}</style></defs><title>coll-directory</title><g id="_2" data-name="2"><polygon class="cls-1" points="6.5 4.5 2.5 4.5 2.5 2.5 5.63 2.5 6.5 3.12 6.5 4.5"/><rect class="cls-1" x="2.5" y="4.5" width="9" height="5"/><rect class="cls-1" x="4.5" y="8.5" width="9" height="5"/><polygon class="cls-1" points="8.5 8.5 4.5 8.5 4.5 6.5 7.63 6.5 8.5 7.12 8.5 8.5"/></g></svg>

@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><defs><style>.cls-1{fill:#f5f47e;stroke:#caa524;stroke-linecap:round;stroke-linejoin:round;stroke-width:0.75px;}</style></defs><title>directory</title><g id="_2" data-name="2"><rect class="cls-1" x="3" y="5.75" width="10" height="6.5"/><polygon class="cls-1" points="8.5 5.75 3 5.75 3 3.75 7.19 3.75 8.5 4.68 8.5 5.75"/></g></svg>
@ -0,0 +1,100 @@
/////////////////////////////////////////////////////////////
//
// pgAdmin 4 - PostgreSQL Tools
//
// Copyright (C) 2013 - 2025, The pgAdmin Development Team
// This software is released under the PostgreSQL Licence
//
//////////////////////////////////////////////////////////////

import { getNodeListByName } from '../../../../../static/js/node_ajax';
import { getNodePrivilegeRoleSchema } from '../../../static/js/privilege.ui';
import DirectorySchema from './directory.ui';

define('pgadmin.node.directory', [
  'sources/gettext', 'sources/url_for',
  'pgadmin.browser', 'pgadmin.browser.collection',
], function(
  gettext, url_for, pgBrowser
) {

  if (!pgBrowser.Nodes['coll-directory']) {
    pgBrowser.Nodes['coll-directory'] =
      pgBrowser.Collection.extend({
        node: 'directory',
        label: gettext('Directories'),
        type: 'coll-directory',
        columns: ['name', 'diruser'],
        canDrop: true,
        canDropCascade: false,
      });
  }
  if (!pgBrowser.Nodes['directory']) {
    pgBrowser.Nodes['directory'] = pgBrowser.Node.extend({
      parent_type: 'server',
      type: 'directory',
      epasHelp: true,
      dialogHelp: url_for('help.static', {'filename': 'directory_dialog.html'}),
      label: gettext('Directory'),
      hasSQL: true,
      canDrop: true,
      Init: function() {
        /* Avoid multiple registration of menus */
        if (this.initialized)
          return;

        this.initialized = true;

        pgBrowser.add_menus([{
          name: 'create_directory_on_server', node: 'server', module: this,
          applies: ['object', 'context'], callback: 'show_obj_properties',
          category: 'create', priority: 4, label: gettext('Directory...'),
          data: {action: 'create',
            data_disabled: gettext('This option is only available on EPAS servers.')},
          /* Check the server type and version; directories are
           * only supported on EPAS 13 and above.
           */
          enable: function(node, item) {
            let treeData = pgBrowser.tree.getTreeNodeHierarchy(item),
              server = treeData['server'];
            return server.connected && node.server_type === 'ppas' &&
              node.version >= 130000;
          },
        },{
          name: 'create_directory_on_coll', node: 'coll-directory', module: this,
          applies: ['object', 'context'], callback: 'show_obj_properties',
          category: 'create', priority: 4, label: gettext('Directory...'),
          data: {action: 'create',
            data_disabled: gettext('This option is only available on EPAS servers.')},
        },{
          name: 'create_directory', node: 'directory', module: this,
          applies: ['object', 'context'], callback: 'show_obj_properties',
          category: 'create', priority: 4, label: gettext('Directory...'),
          data: {action: 'create',
            data_disabled: gettext('This option is only available on EPAS servers.')},
        },
        ]);
      },
      can_create_directory: function(node, item) {
        let treeData = pgBrowser.tree.getTreeNodeHierarchy(item),
          server = treeData['server'];
        return server.connected && server.user.is_superuser;
      },

      getSchema: function(treeNodeInfo, itemNodeData) {
        return new DirectorySchema(
          (privileges)=>getNodePrivilegeRoleSchema(this, treeNodeInfo, itemNodeData, privileges),
          treeNodeInfo,
          {
            role: ()=>getNodeListByName('role', treeNodeInfo, itemNodeData),
          },
          {
            diruser: pgBrowser.serverInfo[treeNodeInfo.server._id].user.name,
          },
        );
      },
    });
  }

  return pgBrowser.Nodes['coll-directory'];
});
@ -0,0 +1,86 @@
/////////////////////////////////////////////////////////////
//
// pgAdmin 4 - PostgreSQL Tools
//
// Copyright (C) 2013 - 2025, The pgAdmin Development Team
// This software is released under the PostgreSQL Licence
//
//////////////////////////////////////////////////////////////

import gettext from 'sources/gettext';
import BaseUISchema from 'sources/SchemaView/base_schema.ui';
import { isEmptyString } from '../../../../../../static/js/validators';

export default class DirectorySchema extends BaseUISchema {
  constructor(getPrivilegeRoleSchema, treeNodeInfo, fieldOptions={}, initValues={}) {
    super({
      name: undefined,
      owner: undefined,
      path: undefined,
      diracl: [],
      ...initValues,
    });
    this.getPrivilegeRoleSchema = getPrivilegeRoleSchema;
    this.fieldOptions = {
      role: [],
      ...fieldOptions,
    };
    this.treeNodeInfo = treeNodeInfo;
  }

  get idAttribute() {
    return 'oid';
  }

  get baseFields() {
    let obj = this;
    let fields = [
      {
        id: 'name', label: gettext('Name'), cell: 'text',
        type: 'text', mode: ['properties', 'create', 'edit'],
        noEmpty: true, editable: false,
        readonly: function(state) {return !obj.isNew(state); }
      }, {
        id: 'oid', label: gettext('OID'), cell: 'text',
        type: 'text', mode: ['properties'],
      }, {
        id: 'diruser', label: gettext('Owner'), cell: 'text',
        editable: false, type: 'select', options: this.fieldOptions.role,
        controlProps: { allowClear: false }, isCollectionProperty: true
      },{
        id: 'path', label: gettext('Location'),
        noEmpty: true, editable: false,
        group: gettext('Definition'), type: 'text',
        mode: ['properties', 'edit','create'],
        readonly: function(state) {return !obj.isNew(state); },
      },
    ];
    // Check the server version before adding version-specific security fields
    if (this.treeNodeInfo?.server?.version >= 170000) {
      fields.push({
        id: 'diracl', label: gettext('Privileges'), type: 'collection',
        group: gettext('Security'),
        schema: this.getPrivilegeRoleSchema(['R','W']),
        mode: ['edit', 'create'], uniqueCol : ['grantee'],
        canAdd: true, canDelete: true,
      },
      {
        id: 'acl', label: gettext('Privileges'), type: 'text',
        group: gettext('Security'), mode: ['properties'],
      },
      );
    }
    return fields;
  }

  validate(state, setError) {
    let errmsg = null;

    if (this.isNew() && isEmptyString(state.path)) {
      errmsg = gettext('\'Location\' cannot be empty.');
      setError('path', errmsg);
      return true;
    }
    return false;
  }
}
@ -0,0 +1,32 @@
{### SQL to fetch privileges for directories ###}
SELECT 'diracl' AS deftype,
       COALESCE(grantee.rolname, 'PUBLIC') AS grantee,
       grantor.rolname AS grantor,
       ARRAY_AGG(privilege_type) AS privileges,
       ARRAY_AGG(is_grantable) AS grantable
FROM (
    SELECT
        acl.grantee, acl.grantor, acl.is_grantable,
        CASE acl.privilege_type
            WHEN 'SELECT' THEN 'R'
            WHEN 'UPDATE' THEN 'W'
            ELSE 'UNKNOWN'
        END AS privilege_type
    FROM (
        SELECT (d).grantee AS grantee,
               (d).grantor AS grantor,
               (d).is_grantable AS is_grantable,
               (d).privilege_type AS privilege_type
        FROM (
            SELECT pg_catalog.aclexplode(ed.diracl) AS d
            FROM pg_catalog.edb_dir ed
{% if dr_id %}
            WHERE ed.oid = {{ dr_id|qtLiteral(conn) }}::OID
{% endif %}
        ) acl_exploded
    ) acl
) acl_final
LEFT JOIN pg_catalog.pg_roles grantor ON (acl_final.grantor = grantor.oid)
LEFT JOIN pg_catalog.pg_roles grantee ON (acl_final.grantee = grantee.oid)
GROUP BY grantor.rolname, grantee.rolname
ORDER BY grantee;
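The CASE expression in the template above folds exploded EPAS directory ACL entries down to the two privilege letters the dialog works with (R for SELECT/read, W for UPDATE/write). The same mapping expressed in plain Python, as a standalone illustration rather than pgAdmin code:

```python
# Assumed mapping, taken from the CASE expression in the ACL template:
# SELECT -> 'R' (read), UPDATE -> 'W' (write), anything else -> 'UNKNOWN'.
PRIVILEGE_LETTERS = {'SELECT': 'R', 'UPDATE': 'W'}

def to_letters(privilege_types):
    """Map exploded ACL privilege names to the dialog's R/W letters."""
    return [PRIVILEGE_LETTERS.get(p, 'UNKNOWN') for p in privilege_types]
```

This is why the UI schema hands `['R','W']` to `getPrivilegeRoleSchema`: those are the only letters the directory ACL can produce.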
@ -0,0 +1,22 @@
{### SQL to alter directory ###}
{% import 'macros/privilege.macros' as PRIVILEGE %}
{% if data %}
{### Owner on directory ###}
{% if data.diruser %}
ALTER DIRECTORY {{ conn|qtIdent(data.name) }}
    OWNER TO {{ conn|qtIdent(data.diruser) }};
{% endif %}

{### ACL on directory ###}
{% if data.diracl %}
{% for priv in data.diracl %}
{{ PRIVILEGE.APPLY(conn, 'DIRECTORY', priv.grantee, data.name, priv.without_grant, priv.with_grant) }}
{% endfor %}
{% endif %}

{% endif %}

{# ======== The SQL below will fetch the OID for the given directory ======== #}
{% if directory %}
SELECT dir.oid FROM pg_catalog.edb_dir dir WHERE dirname = {{ directory|qtLiteral(conn) }};
{% endif %}
@ -0,0 +1,4 @@
{### SQL to create directory object ###}
{% if data %}
CREATE DIRECTORY {{ conn|qtIdent(data.name) }} AS {{ data.path|qtLiteral(conn) }};
{% endif %}
@ -0,0 +1,2 @@
{### SQL to delete directory object ###}
DROP DIRECTORY IF EXISTS {{ conn|qtIdent(dr_name) }};
@ -0,0 +1,12 @@
SELECT
    dir.oid AS oid,
    dirname AS name,
    dirowner AS owner,
    dirpath AS path
FROM
    pg_catalog.edb_dir dir
{% if dr_id %}
WHERE
    dir.oid={{ dr_id|qtLiteral(conn) }}::OID
{% endif %}
ORDER BY name;
@ -0,0 +1,13 @@
{### SQL to fetch directory object properties ###}
SELECT
    dir.oid,
    dirname AS name,
    pg_catalog.pg_get_userbyid(dirowner) AS diruser,
    dirpath AS path,
    pg_catalog.array_to_string(diracl::text[], ', ') AS acl
FROM
    pg_catalog.edb_dir dir
{% if dr_id %}
WHERE dir.oid={{ dr_id|qtLiteral(conn) }}::OID
{% endif %}
ORDER BY name;
@ -0,0 +1,37 @@
{### SQL to update directory object ###}
{% import 'macros/privilege.macros' as PRIVILEGE %}
{% if data %}

{# ==== To update directory name ==== #}
{% if data.name and data.name != o_data.name %}
ALTER DIRECTORY {{ conn|qtIdent(o_data.name) }}
    RENAME TO {{ conn|qtIdent(data.name) }};
{% endif %}

{# ==== To update OWNER name ==== #}
{% if data.diruser %}
ALTER DIRECTORY {{ conn|qtIdent(data.name) }} OWNER TO {{ conn|qtIdent(data.diruser) }};
{% endif %}

{# ==== To update directory privileges ==== #}
{# Change the privileges #}
{% if data.diracl %}
{% if 'deleted' in data.diracl %}
{% for priv in data.diracl.deleted %}
{{ PRIVILEGE.RESETALL(conn, 'DIRECTORY', priv.grantee, data.name) }}
{% endfor %}
{% endif %}
{% if 'changed' in data.diracl %}
{% for priv in data.diracl.changed %}
{{ PRIVILEGE.RESETALL(conn, 'DIRECTORY', priv.grantee, data.name) }}
{{ PRIVILEGE.APPLY(conn, 'DIRECTORY', priv.grantee, data.name, priv.without_grant, priv.with_grant) }}
{% endfor %}
{% endif %}
{% if 'added' in data.diracl %}
{% for priv in data.diracl.added %}
{{ PRIVILEGE.APPLY(conn, 'DIRECTORY', priv.grantee, data.name, priv.without_grant, priv.with_grant) }}
{% endfor %}
{% endif %}

{% endif %}
{% endif %}
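The update template above applies privilege changes in three passes: deleted grants are revoked, changed grants are revoked and then re-applied, and added grants are applied last. A rough standalone sketch of that ordering follows; the exact GRANT/REVOKE text is produced by `privilege.macros`, which is not shown in this diff, so the statement strings below are illustrative approximations only:

```python
def plan_acl_statements(name, diracl):
    """Return directory privilege statements in the order the template
    emits them: REVOKE for 'deleted', REVOKE then GRANT for 'changed',
    and GRANT for 'added'.  Statement text is illustrative, not the
    macro output."""
    stmts = []
    for priv in diracl.get('deleted', []):
        stmts.append('REVOKE ALL ON DIRECTORY "%s" FROM "%s";'
                     % (name, priv['grantee']))
    for priv in diracl.get('changed', []):
        # A changed grant is reset first, then re-granted with its new set.
        stmts.append('REVOKE ALL ON DIRECTORY "%s" FROM "%s";'
                     % (name, priv['grantee']))
        stmts.append('GRANT %s ON DIRECTORY "%s" TO "%s";'
                     % (', '.join(priv['privileges']), name, priv['grantee']))
    for priv in diracl.get('added', []):
        stmts.append('GRANT %s ON DIRECTORY "%s" TO "%s";'
                     % (', '.join(priv['privileges']), name, priv['grantee']))
    return stmts
```

Revoking before granting ensures a changed ACL entry never ends up with the union of its old and new privileges.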
@ -0,0 +1,15 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

from pgadmin.utils.route import BaseTestGenerator


class DirectoriesCreateTestCase(BaseTestGenerator):
    def runTest(self):
        return
@ -0,0 +1 @@
ALTER DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#" OWNER TO enterprisedb;

@ -0,0 +1,8 @@
-- Directory: Dir1_$%{}[]()&*^!@"'`\/#

-- DROP DIRECTORY IF EXISTS Dir1_$%{}[]()&*^!@"'`\/#;

CREATE DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#" AS '/home/test_dir';

ALTER DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#"
    OWNER TO enterprisedb;

@ -0,0 +1,4 @@
CREATE DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#" AS '/home/test_dir';

ALTER DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#"
    OWNER TO enterprisedb;

@ -0,0 +1,8 @@
-- Directory: Dir1_$%{}[]()&*^!@"'`\/#

-- DROP DIRECTORY IF EXISTS Dir1_$%{}[]()&*^!@"'`\/#;

CREATE DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#" AS '/home/test_dir';

ALTER DIRECTORY "Dir1_$%{}[]()&*^!@""'`\/#"
    OWNER TO enterprisedb;
@ -0,0 +1,36 @@
{
  "scenarios": [
    {
      "type": "create",
      "name": "Create Directories",
      "endpoint": "NODE-directory.obj",
      "sql_endpoint": "NODE-directory.sql_id",
      "msql_endpoint": "NODE-directory.msql",
      "data": {
        "name": "Dir1_$%{}[]()&*^!@\"'`\\/#",
        "diruser": "enterprisedb",
        "path": "/home/test_dir"
      },
      "expected_sql_file": "create_directory.sql",
      "expected_msql_file": "create_directory.msql"
    },
    {
      "type": "alter",
      "name": "Alter Directory owner",
      "endpoint": "NODE-directory.obj_id",
      "sql_endpoint": "NODE-directory.sql_id",
      "msql_endpoint": "NODE-directory.msql_id",
      "data": {
        "diruser": "enterprisedb"
      },
      "expected_sql_file": "alter_directory_owner.sql",
      "expected_msql_file": "alter_directory_owner.msql"
    },
    {
      "type": "delete",
      "name": "Drop Directories",
      "endpoint": "NODE-directory.obj_id",
      "data": {}
    }
  ]
}
@ -0,0 +1,63 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

import json
import uuid

from pgadmin.utils import server_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from . import utils as directories_utils


class DirectoriesAddTestCase(BaseTestGenerator):
    """This class will test the add directories API."""
    scenarios = [
        ('Add Directories', dict(url='/browser/directory/obj/'))
    ]

    def setUp(self):
        self.server_id = parent_node_dict["server"][-1]["server_id"]
        server_con = server_utils.connect_server(self, self.server_id)
        if server_con["info"] != "Server connected.":
            raise Exception("Could not connect to server to add directory.")
        if "type" in server_con["data"]:
            if server_con["data"]["type"] == "pg":
                message = "Directories are not supported by PG."
                self.skipTest(message)
            else:
                if server_con["data"]["version"] < 130000:
                    message = "Directories are not supported by EPAS 12" \
                              " and below."
                    self.skipTest(message)

    def runTest(self):
        """This function will add directories under the server node."""
        self.directory = "test_directory_add%s" % \
                         str(uuid.uuid4())[1:8]
        data = {
            "name": self.directory,
            "path": "/home/test_dir"
        }
        response = self.tester.post(self.url + str(utils.SERVER_GROUP) +
                                    "/" + str(self.server_id) + "/",
                                    data=json.dumps(data),
                                    content_type='html/json')
        self.assertEqual(response.status_code, 200)

    def tearDown(self):
        """This function deletes the directory from the database."""
        connection = utils.get_db_connection(self.server['db'],
                                             self.server['username'],
                                             self.server['db_password'],
                                             self.server['host'],
                                             self.server['port'],
                                             self.server['sslmode'])
        directories_utils.delete_directories(connection, self.directory)
@ -0,0 +1,65 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

import uuid

from pgadmin.utils import server_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from . import utils as directories_utils


class DirectoriesDeleteTestCase(BaseTestGenerator):
    """This class will delete the Directory."""
    scenarios = [
        ('Delete Directory', dict(url='/browser/directory/obj/'))
    ]

    def setUp(self):
        self.server_id = parent_node_dict["server"][-1]["server_id"]
        server_response = server_utils.connect_server(self, self.server_id)
        if server_response["info"] != "Server connected.":
            raise Exception("Could not connect to server to add directories.")
        if "type" in server_response["data"]:
            if server_response["data"]["type"] == "pg":
                message = "Directories are not supported by PG."
                self.skipTest(message)
            else:
                if server_response["data"]["version"] < 130000:
                    message = "Directories are not supported by EPAS " \
                              "12 and below."
                    self.skipTest(message)
        self.directory_name = "test_directory_delete%s" % \
                              str(uuid.uuid4())[1:8]
        self.directory_path = "/home/test_dir"
        self.directory_id = directories_utils.create_directories(
            self.server, self.directory_name, self.directory_path)

    def runTest(self):
        """This function will delete the Directory."""
        directory_response = directories_utils.verify_directory(
            self.server, self.directory_name)
        if not directory_response:
            raise Exception("Could not find the Directory to fetch.")
        response = self.tester.delete(
            "{0}{1}/{2}/{3}".format(self.url, utils.SERVER_GROUP,
                                    self.server_id, self.directory_id),
            follow_redirects=True)
        self.assertEqual(response.status_code, 200)

    def tearDown(self):
        """This function deletes the Directory from the database."""
        connection = utils.get_db_connection(self.server['db'],
                                             self.server['username'],
                                             self.server['db_password'],
                                             self.server['host'],
                                             self.server['port'],
                                             self.server['sslmode'])
        directories_utils.delete_directories(connection, self.directory_name)
@ -0,0 +1,96 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

import uuid
import json

from pgadmin.utils import server_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from . import utils as directories_utils


class DirectoriesDeleteTestCase(BaseTestGenerator):
    """This class will delete the directories."""
    scenarios = [
        ('Delete multiple directories',
         dict(url='/browser/directory/obj/'))
    ]

    def setUp(self):
        self.server_id = parent_node_dict["server"][-1]["server_id"]
        server_response = server_utils.connect_server(self, self.server_id)
        if server_response["info"] != "Server connected.":
            raise Exception("Could not connect to server to add directory.")
        if "type" in server_response["data"]:
            if server_response["data"]["type"] == "pg":
                message = "Directories are not supported by PG."
                self.skipTest(message)
            else:
                if server_response["data"]["version"] < 130000:
                    message = "Directories are not supported by EPAS 12 " \
                              "and below."
                    self.skipTest(message)
        self.directory_names = ["test_directory_delete%s" %
                                str(uuid.uuid4())[1:8],
                                "test_directory_delete%s" %
                                str(uuid.uuid4())[1:8]]
        self.directory_paths = ["/home/test_dir", "/home/test_dir1"]
        self.directory_ids = [
            directories_utils.create_directories(
                self.server, self.directory_names[0], self.directory_paths[0]),
            directories_utils.create_directories(
                self.server, self.directory_names[1], self.directory_paths[1])]

    def runTest(self):
        """This function will delete the directories."""
        directory_response = directories_utils.verify_directory(
            self.server, self.directory_names[0])
        if not directory_response:
            raise Exception("Could not find the directory to fetch.")

        directory_response = directories_utils.verify_directory(
            self.server, self.directory_names[1])
        if not directory_response:
            raise Exception("Could not find the directory to fetch.")

        data = {'ids': self.directory_ids}
        response = self.tester.delete(
            "{0}{1}/{2}/".format(self.url,
                                 utils.SERVER_GROUP,
                                 self.server_id),
            follow_redirects=True,
            data=json.dumps(data),
            content_type='html/json'
        )
        self.assertEqual(response.status_code, 200)

    def tearDown(self):
        """This function deletes the directories from the database."""
        connection = utils.get_db_connection(self.server['db'],
                                             self.server['username'],
                                             self.server['db_password'],
                                             self.server['host'],
                                             self.server['port'],
                                             self.server['sslmode'])
        directories_utils.delete_directories(
            connection,
            self.directory_names[0]
        )
        connection = utils.get_db_connection(self.server['db'],
                                             self.server['username'],
                                             self.server['db_password'],
                                             self.server['host'],
                                             self.server['port'],
                                             self.server['sslmode'])
        directories_utils.delete_directories(
            connection,
            self.directory_names[1]
        )
@ -0,0 +1,65 @@
|
||||||
|
##########################################################################
|
||||||
|
#
|
||||||
|
# pgAdmin 4 - PostgreSQL Tools
|
||||||
|
#
|
||||||
|
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
|
||||||
|
# This software is released under the PostgreSQL Licence
|
||||||
|
#
|
||||||
|
##########################################################################
|
||||||
|
|
||||||
|
import uuid
|
||||||
|
|
||||||
|
from pgadmin.utils import server_utils
|
||||||
|
from pgadmin.utils.route import BaseTestGenerator
|
||||||
|
from regression import parent_node_dict
|
||||||
|
from regression.python_test_utils import test_utils as utils
|
||||||
|
from . import utils as directorys_utils
|
||||||
|
|
||||||
|
|
||||||
|
class DirectoriesGetTestCase(BaseTestGenerator):
|
||||||
|
"""This class will get the directories"""
|
||||||
|
scenarios = [
|
||||||
|
('Get directories', dict(url='/browser/directory/obj/'))
|
||||||
|
]
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.server_id = parent_node_dict["server"][-1]["server_id"]
|
||||||
|
server_response = server_utils.connect_server(self, self.server_id)
|
||||||
|
if server_response["info"] != "Server connected.":
|
||||||
|
raise Exception("Could not connect to server to add directories")
|
||||||
|
if "type" in server_response["data"]:
|
||||||
|
if server_response["data"]["type"] == "pg":
|
||||||
|
message = "directories are not supported by PG."
|
||||||
|
self.skipTest(message)
|
||||||
|
else:
|
||||||
|
if server_response["data"]["version"] < 13000:
|
||||||
|
message = "directories are not supported by EPAS 12" \
|
||||||
|
" and below."
|
||||||
|
self.skipTest(message)
|
||||||
|
self.directory_name = "test_directory_get%s" % \
|
||||||
|
str(uuid.uuid4())[1:8]
|
||||||
|
self.directory_path = "/home/test_dir"
|
||||||
|
self.directory_id = directorys_utils.create_directories(
|
||||||
|
self.server, self.directory_name, self.directory_path)
|
||||||
|
|
||||||
|
def runTest(self):
|
||||||
|
"""This function will get the directories."""
|
||||||
|
directory_response = directorys_utils.verify_directory(
|
||||||
|
self.server, self.directory_name)
|
||||||
|
if not directory_response:
|
||||||
|
raise Exception("Could not find the directory to fetch.")
|
||||||
|
response = self.tester.get(
|
||||||
|
"{0}{1}/{2}/{3}".format(self.url, utils.SERVER_GROUP,
|
||||||
|
self.server_id, self.directory_id),
|
||||||
|
follow_redirects=True)
|
||||||
|
self.assertEqual(response.status_code, 200)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
"""This function delete the directory from the database."""
|
||||||
|
connection = utils.get_db_connection(self.server['db'],
|
||||||
|
self.server['username'],
|
||||||
|
self.server['db_password'],
|
||||||
|
self.server['host'],
|
||||||
|
self.server['port'],
|
||||||
|
self.server['sslmode'])
|
||||||
|
directorys_utils.delete_directories(connection, self.directory_name)
|
||||||
|
@ -0,0 +1,81 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

import json
import uuid

from pgadmin.utils import server_utils
from pgadmin.utils.route import BaseTestGenerator
from regression import parent_node_dict
from regression.python_test_utils import test_utils as utils
from . import utils as directories_utils


class DirectoriesPutTestCase(BaseTestGenerator):
    """This class will update the directories"""
    scenarios = [
        ('Put directories', dict(url='/browser/directory/obj/'))
    ]

    def setUp(self):
        self.server_id = parent_node_dict["server"][-1]["server_id"]
        server_response = server_utils.connect_server(self, self.server_id)
        if server_response["info"] != "Server connected.":
            raise Exception("Could not connect to server to add directories.")
        if "type" in server_response["data"]:
            if server_response["data"]["type"] == "pg":
                message = "Directories are not supported by PG."
                self.skipTest(message)
            else:
                if server_response["data"]["version"] < 130000:
                    message = "Directories are not supported by EPAS 12" \
                              " and below."
                    self.skipTest(message)
        self.directory_name = "test_directory_put%s" % \
                              str(uuid.uuid4())[1:8]
        self.directory_path = "/home/test_dir"
        self.directory_id = directories_utils.create_directories(
            self.server, self.directory_name, self.directory_path)
        self.role_name = "role_for_directory_%s" % \
                         str(uuid.uuid4())[1:8]
        self.role = directories_utils.create_superuser_role(
            self.server, self.role_name)

    def runTest(self):
        """This function will update the directories."""
        directory_response = directories_utils.verify_directory(
            self.server, self.directory_name)
        if not directory_response:
            raise Exception("Could not find the directory to update.")
        data = {"id": self.directory_id,
                "diruser": self.role_name}
        url = '{0}{1}/{2}/{3}'.format(
            self.url, utils.SERVER_GROUP, self.server_id,
            self.directory_id)
        response = self.tester.put(
            url,
            data=json.dumps(data),
            follow_redirects=True
        )
        self.assertEqual(response.status_code, 200)

    def tearDown(self):
        """This function deletes the directory and role from the database."""
        connection = utils.get_db_connection(self.server['db'],
                                             self.server['username'],
                                             self.server['db_password'],
                                             self.server['host'],
                                             self.server['port'],
                                             self.server['sslmode'])
        directories_utils.delete_directories(connection, self.directory_name)
        utils.drop_role(self.server, "postgres", self.role)
@ -0,0 +1,122 @@
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2025, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################

import sys
import traceback

from regression.python_test_utils import test_utils as utils


def create_directories(server, directory_name, directory_path):
    """
    This function creates a directory in the database and returns its OID.
    """
    try:
        connection = utils.get_db_connection(server['db'],
                                             server['username'],
                                             server['db_password'],
                                             server['host'],
                                             server['port'],
                                             server['sslmode'])
        old_isolation_level = connection.isolation_level
        utils.set_isolation_level(connection, 0)
        pg_cursor = connection.cursor()
        sql = f"CREATE DIRECTORY {directory_name} AS '{directory_path}'"
        pg_cursor.execute(sql)
        utils.set_isolation_level(connection, old_isolation_level)
        connection.commit()
        # Get the oid of the newly created directory.
        pg_cursor.execute("SELECT oid FROM pg_catalog.edb_dir WHERE "
                          "dirname='%s'" % directory_name)
        directory = pg_cursor.fetchone()
        directory_id = directory[0]
        connection.close()
        return directory_id
    except Exception:
        traceback.print_exc(file=sys.stderr)


def verify_directory(server, directory_name):
    """
    This function verifies whether the directory exists in the database.
    """
    try:
        connection = utils.get_db_connection(server['db'],
                                             server['username'],
                                             server['db_password'],
                                             server['host'],
                                             server['port'],
                                             server['sslmode'])
        pg_cursor = connection.cursor()
        pg_cursor.execute("SELECT * FROM pg_catalog.edb_dir WHERE "
                          "dirname='%s'" % directory_name)
        directory = pg_cursor.fetchone()
        connection.close()
        return directory
    except Exception:
        traceback.print_exc(file=sys.stderr)


def delete_directories(connection, directory_name):
    """
    This function deletes the directory.
    """
    try:
        pg_cursor = connection.cursor()
        pg_cursor.execute("SELECT * FROM pg_catalog.edb_dir WHERE "
                          "dirname='%s'" % directory_name)
        directory_name_count = len(pg_cursor.fetchall())
        if directory_name_count:
            old_isolation_level = connection.isolation_level
            utils.set_isolation_level(connection, 0)
            pg_cursor.execute("DROP DIRECTORY IF EXISTS %s" % directory_name)
            utils.set_isolation_level(connection, old_isolation_level)
            connection.commit()
        connection.close()
    except Exception:
        traceback.print_exc(file=sys.stderr)


def create_superuser_role(server, role_name):
    """
    This function creates a role with the superuser attribute.
    """
    try:
        connection = utils.get_db_connection(server['db'],
                                             server['username'],
                                             server['db_password'],
                                             server['host'],
                                             server['port'],
                                             server['sslmode'])
        old_isolation_level = connection.isolation_level
        utils.set_isolation_level(connection, 0)
        pg_cursor = connection.cursor()
        sql = '''CREATE USER "%s" WITH SUPERUSER''' % role_name
        pg_cursor.execute(sql)
        utils.set_isolation_level(connection, old_isolation_level)
        connection.commit()
        # Get the name of the newly created role.
        pg_cursor.execute("SELECT usename FROM pg_user WHERE "
                          "usename='%s'" % role_name)
        user_role = pg_cursor.fetchone()
        role_username = user_role[0]
        connection.close()
        return role_username
    except Exception:
        traceback.print_exc(file=sys.stderr)
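The helper functions above interpolate the directory name and path straight into SQL with f-strings and `%` formatting, which is fine for trusted test fixtures but fragile otherwise. A minimal hardening sketch is shown below; `build_create_directory_sql` is a hypothetical helper, not part of pgAdmin, and it simply validates the alias as a plain identifier and doubles single quotes in the path literal.

```python
import re


def build_create_directory_sql(directory_name, directory_path):
    """Build a CREATE DIRECTORY statement defensively.

    Hypothetical hardening of the f-string interpolation used by
    create_directories(): the alias must be a plain SQL identifier,
    and single quotes in the path are doubled so a stray quote
    cannot terminate the string literal early.
    """
    if not re.fullmatch(r'[A-Za-z_][A-Za-z0-9_]*', directory_name):
        raise ValueError("unsafe directory alias: %r" % directory_name)
    escaped_path = directory_path.replace("'", "''")
    return "CREATE DIRECTORY %s AS '%s'" % (directory_name, escaped_path)
```

A production implementation would more likely use the driver's own quoting (e.g. psycopg2's `sql.Identifier`/`sql.Literal`) rather than hand-rolled escaping.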
@ -35,7 +35,7 @@ class ResourceGroupsAddTestCase(BaseTestGenerator):
                 self.skipTest(message)
             else:
                 if server_con["data"]["version"] < 90400:
-                    message = "Resource groups are not supported by PPAS 9.3" \
+                    message = "Resource groups are not supported by EPAS 9.3" \
                               " and below."
                     self.skipTest(message)
@ -34,7 +34,7 @@ class ResourceGroupsDeleteTestCase(BaseTestGenerator):
                 self.skipTest(message)
             else:
                 if server_response["data"]["version"] < 90400:
-                    message = "Resource groups are not supported by PPAS " \
+                    message = "Resource groups are not supported by EPAS " \
                               "9.3 and below."
                     self.skipTest(message)
         self.resource_group = "test_resource_group_delete%s" % \
@ -36,7 +36,7 @@ class ResourceGroupsDeleteTestCase(BaseTestGenerator):
                 self.skipTest(message)
             else:
                 if server_response["data"]["version"] < 90400:
-                    message = "Resource groups are not supported by PPAS " \
+                    message = "Resource groups are not supported by EPAS " \
                               "9.3 and below."
                     self.skipTest(message)
         self.resource_groups = ["test_resource_group_delete%s" %
@ -35,7 +35,7 @@ class ResourceGroupsPutTestCase(BaseTestGenerator):
                 self.skipTest(message)
             else:
                 if server_response["data"]["version"] < 90400:
-                    message = "Resource groups are not supported by PPAS 9.3" \
+                    message = "Resource groups are not supported by EPAS 9.3" \
                               " and below."
                     self.skipTest(message)
         self.resource_group_name = "test_resource_group_put%s" % \
@ -34,7 +34,7 @@ class ResourceGroupsGetTestCase(BaseTestGenerator):
                 self.skipTest(message)
             else:
                 if server_response["data"]["version"] < 90400:
-                    message = "Resource groups are not supported by PPAS 9.3" \
+                    message = "Resource groups are not supported by EPAS 9.3" \
                               " and below."
                     self.skipTest(message)
         self.resource_group = "test_resource_group_get%s" % \
@ -80,7 +80,7 @@
         "new_role_name": "CURRENT_ROLE"
     },
     "server_min_version": 140000,
-    "skip_msg": "CURRENT_ROLE are not supported by PPAS/PG 13.0 and below.",
+    "skip_msg": "CURRENT_ROLE are not supported by EPAS/PG 13.0 and below.",
     "mocking_required": false,
     "mock_data": {},
     "expected_data": {
@ -196,7 +196,7 @@
         "new_role_name": "CURRENT_ROLE"
     },
     "server_min_version": 140000,
-    "skip_msg": "CURRENT_ROLE are not supported by PPAS/PG 13.0 and below.",
+    "skip_msg": "CURRENT_ROLE are not supported by EPAS/PG 13.0 and below.",
     "mocking_required": false,
     "mock_data": {},
     "expected_data": {
@ -114,7 +114,9 @@ def parse_priv_to_db(str_privileges, allowed_acls=[]):
         'T': 'TEMPORARY',
         'a': 'INSERT',
         'r': 'SELECT',
+        'R': 'READ',
         'w': 'UPDATE',
+        'W': 'WRITE',
         'd': 'DELETE',
         'D': 'TRUNCATE',
         'x': 'REFERENCES',
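The hunk above extends the privilege-letter table with `R` (READ) and `W` (WRITE) for directory ACLs. A minimal sketch of how such a letter string expands to privilege names is below; `ACL_MAP` and `expand_acl_chars` are illustrative stand-ins, not pgAdmin's actual `parse_priv_to_db` implementation.

```python
# Subset of the privilege-letter table from the hunk above, including
# the new READ/WRITE directory privileges. Note that case matters:
# 'r' is SELECT while 'R' is READ, 'w' is UPDATE while 'W' is WRITE.
ACL_MAP = {
    'a': 'INSERT',
    'r': 'SELECT',
    'R': 'READ',
    'w': 'UPDATE',
    'W': 'WRITE',
    'd': 'DELETE',
    'D': 'TRUNCATE',
    'x': 'REFERENCES',
}


def expand_acl_chars(chars):
    """Expand a string of ACL letters (e.g. 'RW') to privilege names,
    skipping letters that are not in the table."""
    return [ACL_MAP[c] for c in chars if c in ACL_MAP]
```

For example, a directory grant recorded as `RW` expands to `['READ', 'WRITE']`.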
@ -81,7 +81,7 @@ define('pgadmin.browser.utils',
     'coll-role', 'role', 'coll-resource_group', 'resource_group',
     'coll-database', 'coll-pga_job', 'coll-pga_schedule', 'coll-pga_jobstep',
     'pga_job', 'pga_schedule', 'pga_jobstep',
-    'coll-replica_node', 'replica_node'
+    'coll-replica_node', 'replica_node', 'coll-directory', 'directory'
   ];

   pgBrowser.utils = {
@ -36,6 +36,8 @@ export default function Privilege({value, onChange, controlProps}) {
     'c': 'CONNECT',
     'a': 'INSERT',
     'r': 'SELECT',
+    'R': 'READ',
+    'W': 'WRITE',
     'w': 'UPDATE',
     'd': 'DELETE',
     'D': 'TRUNCATE',
@ -22,7 +22,8 @@ node_info_dict = {
     "did": [],  # database
     "lrid": [],  # role
     "tsid": [],  # tablespace
-    "scid": []  # schema
+    "scid": [],  # schema
+    "oid": []  # directory
 }

 global parent_node_dict
@ -31,5 +32,6 @@ parent_node_dict = {
     "database": [],
     "tablespace": [],
     "role": [],
-    "schema": []
+    "schema": [],
+    "directory": []
 }
@ -0,0 +1,49 @@
/////////////////////////////////////////////////////////////
//
// pgAdmin 4 - PostgreSQL Tools
//
// Copyright (C) 2013 - 2025, The pgAdmin Development Team
// This software is released under the PostgreSQL Licence
//
/////////////////////////////////////////////////////////////

import BaseUISchema from 'sources/SchemaView/base_schema.ui';
import DirectorySchema from '../../../pgadmin/browser/server_groups/servers/directories/static/js/directory.ui';

import {genericBeforeEach, getCreateView, getEditView, getPropertiesView} from '../genericFunctions';

class MockSchema extends BaseUISchema {
  get baseFields() {
    return [];
  }
}

describe('DirectorySchema', ()=>{

  const createSchemaObject = () => new DirectorySchema(
    ()=>new MockSchema(),
    {
      role: ()=>[],
      nodeInfo: {server: {user: {name: 'ppass', id: 0}}}
    },
  );
  let getInitData = ()=>Promise.resolve({});

  beforeEach(()=>{
    genericBeforeEach();
  });

  it('create', async ()=>{
    await getCreateView(createSchemaObject());
  });

  it('edit', async ()=>{
    await getEditView(createSchemaObject(), getInitData);
  });

  it('properties', async ()=>{
    await getPropertiesView(createSchemaObject(), getInitData);
  });
});
@ -179,6 +179,7 @@ module.exports = [{
   'pure|pgadmin.node.publication',
   'pure|pgadmin.node.subscription',
   'pure|pgadmin.node.tablespace',
+  'pure|pgadmin.node.directory',
   'pure|pgadmin.node.resource_group',
   'pure|pgadmin.node.event_trigger',
   'pure|pgadmin.node.extension',
@ -128,6 +128,7 @@ let webpackShimConfig = {
   'pgadmin.node.synonym': path.join(__dirname, './pgadmin/browser/server_groups/servers/databases/schemas/synonyms/static/js/synonym'),
   'pgadmin.node.table': path.join(__dirname, './pgadmin/browser/server_groups/servers/databases/schemas/tables/static/js/table'),
   'pgadmin.node.tablespace': path.join(__dirname, './pgadmin/browser/server_groups/servers/tablespaces/static/js/tablespace'),
+  'pgadmin.node.directory': path.join(__dirname, './pgadmin/browser/server_groups/servers/directories/static/js/directory'),
   'pgadmin.node.trigger': path.join(__dirname, './pgadmin/browser/server_groups/servers/databases/schemas/tables/triggers/static/js/trigger'),
   'pgadmin.node.trigger_function': path.join(__dirname, './pgadmin/browser/server_groups/servers/databases/schemas/functions/static/js/trigger_function'),
   'pgadmin.node.type': path.join(__dirname, './pgadmin/browser/server_groups/servers/databases/schemas/types/static/js/type'),