
Mandatory Skills required:
1. Ability to translate business requirements into technical requirements for QlikView
2. Perform detailed analysis of source systems and source-system data, and model that data in QlikView
3. Design, develop, and test QlikView scripts to import data from source systems, data feeds, and flat files to create Qlik marts
4. Proficiency in QlikView scripting, complex QlikView functions, and advanced QlikView expressions; experience with complex data models and with optimizing data models for query performance to create Qlik marts
5. Architecture optimization (including hardware sizing, security setup, and performance tuning)
6. QlikView/Qlik Sense development

Key Responsibilities:
- Develop, maintain, and optimize data pipelines using DBT and SQL.
- Collaborate with data analysts and business teams to build scalable data models.
- Implement data transformations, testing, and documentation within the DBT framework.
- Work with Snowflake for data warehousing tasks, including data ingestion, query optimization, and performance tuning.
- Use Python (preferred) for automation, scripting, and additional data processing as needed.
Required Skills:
- 4–6 years of experience in data engineering or related roles.
- Strong hands-on expertise with DBT and advanced SQL.
- Experience working with modern data warehouses, preferably Snowflake.
- Knowledge of Python for data manipulation and workflow automation (preferred but not mandatory).
- Good understanding of data modeling concepts, ETL/ELT processes, and best practices.
Company Overview:
Davis Index is a leading market intelligence platform and publication that specializes in providing accurate and up-to-date price benchmarks for ferrous and non-ferrous scrap, as well as primary metals. Our dedicated team of reporters, analysts, and data specialists publishes and processes over 1,400 proprietary price indexes, metals futures prices, and other reference data. In addition, we offer market intelligence, news, and analysis through an industry-leading technology platform. With a global presence across the Americas, Asia, Europe, and Africa, our team of over 50 professionals works tirelessly to deliver essential market insights to our valued clients.
Job Overview:
We are seeking a skilled Cloud Engineer to join our team. The ideal candidate will have a strong foundation in cloud technologies and a knack for automating infrastructure processes. You will be responsible for deploying and managing cloud-based solutions while ensuring optimal performance and reliability.
Key Responsibilities:
- Design, deploy, and manage cloud infrastructure solutions.
- Automate infrastructure setup and management using Terraform or Ansible.
- Manage and maintain Kubernetes clusters for containerized applications.
- Work with Linux systems for server management and troubleshooting.
- Configure load balancers to route traffic efficiently.
- Set up and manage database instances along with failover replicas.
Required Skills and Qualifications:
- Minimum of Cloud Practitioner Certification.
- Proficiency with Linux systems.
- Hands-on experience with Kubernetes.
- Expertise in writing automation scripts using Terraform or Ansible.
- Strong understanding of cloud computing concepts.
Application Process: Candidates are encouraged to apply with their resumes and examples of relevant automation scripts they have written in Terraform or Ansible.
• Analyze complex data sets to identify trends, patterns, and insights to drive business decisions.
• Collaborate with cross-functional teams in an Agile environment to define and implement data-driven solutions.
• Create and maintain detailed reports and dashboards to track key performance indicators (KPIs).
• Work closely with product owners, developers, and stakeholders to ensure data requirements are met effectively.
• Utilize Jira for task management, sprint planning, and tracking project progress.
• Provide guidance and mentorship to junior team members in data analysis techniques and best practices.
7+ years of experience in Workday HCM configuration, across 4+ Workday modules and 5+ deployments
Key responsibilities
- Lead design/requirements workshops
- Highly confident in configuring Workday, with the ability to utilise Workday Community where necessary
- Highly confident in building accurate estimates for Workday configuration work
- Approve effort estimates of more junior consultants
- High level of understanding of how businesses utilise HR technology to their advantage
- Directly manage a team of consultants based in India
- Manage customer escalations around direct reports work
- Support in building the CloudRock team based in India by being an advocate for CloudRock and a leadership figure
- Support and mentor junior resources based in India/Portugal/UK
- Full understanding of the Workday deployment methodology
- Achieve a high level of utilisation
- Be consultative in their customer approach
Job Summary
We are looking for an experienced Android developer with one to two years of solid experience.
The role is to create user-friendly applications that support our products and customer requirements.
Preferred Skills
Knowledge of Android development tools such as Eclipse with ADT and the Android SDK; AndroidX is mandatory for experienced developers
Excellent knowledge of working with multiple resolutions of Android smartphones and tablets
The skills to make apps stable
Additional knowledge of REST APIs and other web services is preferable
In-depth knowledge of in-app purchases and mobile payment gateways is highly recommended
Responsibilities
The developer should follow defensive programming practices when developing apps
The developer should use graphics and code in an optimised way, so that apps consume a reasonable amount of device memory
Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)
Salary: Competitive as per Industry Standard
We are expanding our Data Engineering Team and hiring passionate professionals with extensive
knowledge and experience in building and managing large enterprise data and analytics platforms. We
are looking for creative individuals with strong programming skills, who can understand complex
business and architectural problems and develop solutions. The individual will work closely with the rest
of our data engineering and data science team in implementing and managing Scalable Smart Data
Lakes, Data Ingestion Platforms, Machine Learning and NLP based Analytics Platforms, Hyper-Scale
Processing Clusters, Data Mining and Search Engines.
What You’ll Need:
- 3+ years of industry experience in creating and managing end-to-end Data Solutions, Optimal
Data Processing Pipelines and Architecture dealing with large volume, big data sets of varied
data types.
- Proficiency in Python, Linux and shell scripting.
- Strong knowledge of working with PySpark dataframes, Pandas dataframes for writing efficient pre-processing and other data manipulation tasks.
- Strong experience in developing the infrastructure required for data ingestion and for optimal extraction, transformation, and loading of data from a wide variety of data sources, using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).
- Working knowledge of GitHub or other version-control tools.
- Experience with creating RESTful web services and API platforms.
- Work with data science and infrastructure team members to implement practical machine
learning solutions and pipelines in production.
- Experience with cloud providers like Azure/AWS/GCP.
- Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
- Experience with stream-processing systems (Spark Streaming, Kafka, etc.) and working experience with event-driven architectures.
- Strong analytic skills related to working with unstructured datasets.
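As a minimal illustration of the pre-processing work described above, here is a hedged sketch using Pandas on a hypothetical orders dataset (the column names and cleaning rules are assumptions for illustration; the PySpark DataFrame API is closely analogous):

```python
# Hypothetical pre-processing step: drop rows with missing IDs, deduplicate,
# and normalise the amount column to floats.
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Return a cleaned copy of the raw orders frame."""
    out = df.dropna(subset=["order_id"])          # remove rows missing an ID
    out = out.drop_duplicates(subset=["order_id"])  # keep first of each order
    out = out.assign(amount=out["amount"].astype(float))  # normalise dtype
    return out.reset_index(drop=True)

raw = pd.DataFrame({
    "order_id": ["A1", "A1", None, "B2"],
    "amount": ["10.5", "10.5", "3.0", "7"],
})
clean = clean_orders(raw)
print(len(clean))  # 2 rows survive: A1 (deduplicated) and B2
```

In PySpark the same step would use `dropna`, `dropDuplicates`, and a `cast` on the amount column.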
Good to have (to filter or prioritize candidates):
- Experience with testing libraries such as pytest for writing unit tests for the developed code.
- Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
- Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
- Knowledge of Azure Functions, Elasticsearch, etc. would be good to have.
- Experience with model versioning (MLflow) and data versioning would be beneficial.
- Experience with microservices libraries, or with Python libraries such as Flask for hosting ML services and models, would be great.
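As a rough sketch of the last point, this is one minimal way a model could be hosted behind a Flask endpoint; the `predict` function and the `/predict` route are hypothetical placeholders, not a real model or a prescribed API:

```python
# Minimal sketch: expose a (placeholder) model as a JSON microservice.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Placeholder for a real model's inference call (e.g. one loaded via MLflow).
    return {"score": sum(features) / len(features)}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    payload = request.get_json()          # expects {"features": [...]}
    return jsonify(predict(payload["features"]))
```

A client would POST `{"features": [1.0, 2.0, 3.0]}` to `/predict` and receive `{"score": 2.0}` back.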
Experience required: 2+ years as a BDE