Crossposting from the GitHub repo on the off chance others have solved this, since there seems to be very little activity from the developers there.
We're trying to automate nightly config backups for an Aruba switch stack (currently one; we're planning to replace our existing Brocades with these and want a solution in place before we do).
Model/OS Version:
Product: Aruba JL322
Name: Aruba 2930M-48G-PoE+ Switch
Date: Nov 1 2019 19:24:11
Build: 208
Version: WC.16.10.0002
We have the following in the hosts:
[aruba-sitecode]
HOSTNAME ansible_host=#.#.#.# ansible_network_os=aruba ansible_connection=local
[aruba-sitecode:vars]
ansible_user=serviceaccount
ansible_pass=password
ansible_command_timeout=80
And this playbook:
- hosts: aruba-sitecode
  gather_facts: false
  vars:
    date: "{{ lookup('pipe', 'date +%Y%m%d') }}"
    filename: "running_config_{{ inventory_hostname }}_{{ date }}.txt"
  tasks:
    - name: show run
      arubaoss_config_bkup:
        config_type: CT_RUNNING_CONFIG
        server_type: ST_TFTP
        server_ip: #.#.#.#
        file_name: "{{ filename }}"
        use_ssl: True
        user_name: "{{ ansible_user }}"
        password: "{{ ansible_pass }}"
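(For clarity, the `date`/`filename` vars expand the way we expect; here's the shell equivalent of that templating, with the hostname hardcoded for illustration. It produces the same name the module later reports in its invocation output.)

```shell
# Shell equivalent of the playbook's date/filename vars.
# inventory_hostname is hardcoded here for illustration only.
inventory_hostname="BLR-ARUBASTACK-01"
datestamp="$(date +%Y%m%d)"   # same as the playbook's pipe lookup
filename="running_config_${inventory_hostname}_${datestamp}.txt"
echo "$filename"
```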
We are able to authenticate against the API with curl/Postman and get a cookie. When we attempt to run the playbook, we get the following failure:
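(For reference, this is roughly the login call that succeeds for us — IP and credentials redacted; the v7.0 path is taken from the URL in the failure output below.)

```shell
# Roughly the login call that works for us (IP/credentials redacted).
# The /rest/v7.0 path matches the URL the module hits in the failure below.
curl -sk -X POST "https://#.#.#.#:443/rest/v7.0/login-sessions" \
  -H "Content-Type: application/json" \
  -d '{"userName": "serviceaccount", "password": "password"}'
# On success the response includes a session cookie we can reuse for later calls.
```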
fatal: [HOSTNAME]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "body": "<TITLE>400 Bad Request</TITLE>\nBad Request\nAccess is unauthorized.\n",
    "changed": false,
    "connection": "close",
    "content-length": "89",
    "content-type": "text/html",
    "invocation": {
        "module_args": {
            "api_version": "v7.0",
            "config_type": "CT_RUNNING_CONFIG",
            "file_name": "running_config_BLR-ARUBASTACK-01_20200102.txt",
            "forced_reboot": null,
            "host": "#.#.#.#",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "port": 443,
            "provider": {
                "api_version": null,
                "host": "#.#.#.#",
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "port": 443,
                "ssh_keyfile": null,
                "timeout": 30,
                "transport": "aossapi",
                "use_proxy": false,
                "use_ssl": true,
                "username": "serviceaccount",
                "validate_certs": false
            },
            "recovery_mode": null,
            "server_ip": "TFTPSERVERIP",
            "server_name": null,
            "server_passwd": null,
            "server_type": "ST_TFTP",
            "sftp_port": 22,
            "ssh_keyfile": null,
            "state": "create",
            "timeout": 30,
            "use_ssl": true,
            "user_name": "serviceaccount",
            "username": "serviceaccount",
            "validate_certs": false,
            "wait_for_apply": true
        }
    },
    "msg": "HTTP Error 400: Bad Request",
    "server": "eHTTP v2.0",
    "status": 400,
    "url": "https://#.#.#.#:443/rest/v7.0/system/config/cfg_backup_files"
}
We have the following questions:
Is there something obviously wrong with the playbook, or am I omitting a setting? The only things I've been able to find on the Airheads community are posts complaining about the lack of depth in the API and Ansible documentation.
Do we need to specify an API version? If I specify nothing, I receive:
None is not valid api version. using aossapi v6.0 instead
If I specify 6.0 (or 6, or 7.0), I receive:
6.0 is not valid api version. using aossapi v6.0 instead
And yet the URL in the failure output is v7.0.
Are we able to specify an SSL port? We would like the web interface configured on port 8443, but as far as I can tell there isn't a way to feed the module a port.
Lastly, is there a better way to get a running config locally onto our Ansible system? It's much easier on our Arista, Brocade, and Cisco devices, where we can just run show run and drop the output to a file.
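To illustrate what we're after, something like this untested sketch is the workflow we'd prefer. Whether `network_cli`/`cli_command` actually work against ArubaOS-Switch is an open question (that's part of what I'm asking), so the connection and module names here are assumptions:

```yaml
# Untested sketch of what we'd like: grab "show running-config" over SSH
# and write it to a local file, as we do on our other vendors' gear.
# network_cli/cli_command support for ArubaOS-Switch is assumed, not verified.
- hosts: aruba-sitecode
  gather_facts: false
  connection: network_cli
  tasks:
    - name: grab the running config
      cli_command:
        command: show running-config
      register: runcfg

    - name: write it locally
      copy:
        content: "{{ runcfg.stdout }}"
        dest: "./running_config_{{ inventory_hostname }}.txt"
      delegate_to: localhost
```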