An iOS Drag and Drop Tutorial

<seo title="An iOS Drag and Drop Tutorial" titlemode="replace" keywords="ios 11, swift 4, drag and drop, tutorial, xcode 9" description="A tutorial outlining how to add drag and drop support to your iOS app projects"></seo>

With the basics of drag and drop covered in the previous chapter, this chapter will begin to demonstrate some of the key features of drag and drop within the context of an example application. Specifically, the project created in this chapter will demonstrate the use of drag and drop to allow images and text to be transferred between two separate apps. This will include the implementation of both drag and drop delegates, customizing drag animation, working with spring loaded controls and the handling of dropped content.

== Creating the Drag and Drop Project ==

Launch Xcode and create a new iOS project named DragAndDrop using the Single View App template, with Swift selected as the programming language.

== Designing the User Interface ==

Select the Main.storyboard file and modify the scene layout so that it contains an ImageView, a TextView and a Button, matching the layout shown in Figure 74-1 (note that the button has been extended to the left and right margins, given a gray background and assigned text that reads “Sepia On”):


[[File:]]

Figure 74-1


With the ImageView object selected, display the Attributes Inspector panel and change the Content Mode menu to Aspect Fit.

Display the Resolve Auto Layout Issues menu and select the Reset to Suggested Constraints option listed under All Views in View Controller, then delete the sample Latin text from the TextView.

Finally, display the Assistant Editor and establish an outlet connection to a variable in the ViewController.swift file named imageView. Repeat these steps for the TextView and Button, this time naming the variables textView and sepiaButton respectively. Also establish an action connection from the button to a method named switchFilter.
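
Assuming the connections have been made as described, the declarations added to the ViewController.swift file should look something like the following sketch (the property and method names must match those used throughout the remainder of the chapter):

<pre>
class ViewController: UIViewController {

    // Outlets connected from the storyboard scene
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var textView: UITextView!
    @IBOutlet weak var sepiaButton: UIButton!

    // Action connected from the Sepia button (implemented later in the chapter)
    @IBAction func switchFilter(_ sender: Any) {
    }
}
</pre>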

When this scene appears within the app it will be helpful to have borders displayed around the two views so that the user will know where to drop content during a drag and drop operation. Implement this now by editing the ViewController.swift file, adding a new method named initialize and calling it from the viewDidLoad method. Within this method, also enable spring loading on the sepiaButton object:

<pre>
.
.
override func viewDidLoad() {
    super.viewDidLoad()
    initialize()
}

func initialize() {

    sepiaButton.isSpringLoaded = true

    textView.layer.borderWidth = 1
    textView.layer.borderColor = UIColor.lightGray.cgColor

    imageView.layer.borderWidth = 1
    imageView.layer.borderColor = UIColor.lightGray.cgColor
}
.
.
</pre>

Before testing the project so far, add a variable to store the current sepia filter setting and complete the switchFilter action method so that it toggles this variable and changes the button title to reflect the button’s current mode:

<pre>
.
.
var sepiaFilter = true
.
.
@IBAction func switchFilter(_ sender: Any) {

    sepiaFilter = !sepiaFilter

    sepiaButton.setTitle(sepiaFilter ? "Sepia On" : "Sepia Off",
                         for: .normal)
}
</pre>

== Testing the Default Behavior ==

With an iPad device or simulator connected and running in landscape mode, launch the Safari browser and place it in the background. Repeat this step for the Photos app. Compile and run the DragAndDrop app and, once loaded, swipe up from the bottom of the screen to display the dock panel. From the dock, touch the Photos app icon and drag it upwards and to the right of the screen before releasing the touch so that it appears in split view mode as shown in Figure 74-2 below:


[[File:]]

Figure 74-2


Touch and hold on one of the images in the Photos app until the image lifts up ready for dragging. Drag the lifted image preview across to the ImageView in the DragAndDrop app and attempt to drop it. Note that because the view has not been configured to handle drops, the drop is cancelled and the image preview simply disappears.

Display the dock a second time, this time dragging the Safari app icon up and to the right before dropping it onto the Photos app. Once Safari has replaced the Photos app in the split view panel, navigate to a web page containing text and then highlight and select a section of the text using the same technique you would use when planning a copy and paste operation. Perform a long press over the highlighted text until the drag preview appears, then drag the text so that it hovers over the TextView in the DragAndDrop app. In this case, a green + icon appears in the top right-hand corner of the drag preview image (Figure 74-3) indicating that the view will accept the drop. Releasing the drag at this point causes the dragged text to appear in the TextView widget. This is because drop support is included by default with both the TextView and TextField views.


[[File:]]

Figure 74-3


Although drop support is built into the TextField and TextView, drag support is not. Verify this by performing a long press on the TextView and observing that no preview image lifts from the view to indicate that a drag session has started.

== Adding Drop Support to the Image View ==

The next step in implementing drag and drop support in this project is to allow images to be dropped onto the ImageView. After the image is dropped, and before it is displayed to the user, the app will also use a sepia filter to change the appearance of the image.

For the purposes of this example, the ViewController class will serve as the drop interaction delegate so edit the ViewController.swift file and modify the class to make this declaration:

<pre>
import UIKit

class ViewController: UIViewController, UIDropInteractionDelegate {
</pre>

Remaining in the ViewController class file, modify the code in the recently added initialize method to attach a drop interaction instance to the image view, with the delegate set to the current view controller. Also add a line of code to enable the isUserInteractionEnabled property on the view. By default, the UIImageView class discards all forms of user interaction such as touches, focus, presses and keyboard events, so this property must be enabled for drag and drop to function within the image view:

<pre>
func initialize() {

    sepiaButton.isSpringLoaded = true

    textView.layer.borderWidth = 1
    textView.layer.borderColor = UIColor.lightGray.cgColor

    imageView.layer.borderWidth = 1
    imageView.layer.borderColor = UIColor.lightGray.cgColor

    imageView.isUserInteractionEnabled = true
    imageView.addInteraction(UIDropInteraction(delegate: self))
}
</pre>

When an image is dragged over the ImageView, the canHandle delegate method will be called by the system to identify the types of content the view is able to handle. This method now needs to be implemented to indicate that the views in the app can potentially handle images and strings:

<pre>
.
.
import MobileCoreServices
.
.
func dropInteraction(_ interaction: UIDropInteraction,
    canHandle session: UIDropSession) -> Bool {

    return session.hasItemsConforming(toTypeIdentifiers:
        [kUTTypeImage as String, kUTTypeUTF8PlainText as String]) &&
        session.items.count == 1
}
</pre>

The two type strings used here are from a pre-defined list of Uniform Type Identifiers defined by Apple. A full list of available identifiers can be found online at:

https://developer.apple.com/library/content/documentation/Miscellaneous/Reference/UTIRef/Articles/System-DeclaredUniformTypeIdentifiers.html

Since the ImageView can only display one image at a time, the code also checks that the drop session contains only one item. If the drop target is able to handle multiple drop items in a single operation this check would not be performed.
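
Purely as an illustration (this variation is not used in the tutorial project), a handler that accepted either images or URLs and permitted up to three items per drop might take the following form:

<pre>
import MobileCoreServices

// Illustrative variation only: accept images or URLs, up to three items per drop
func dropInteraction(_ interaction: UIDropInteraction,
    canHandle session: UIDropSession) -> Bool {

    return session.hasItemsConforming(toTypeIdentifiers:
        [kUTTypeImage as String, kUTTypeURL as String]) &&
        session.items.count <= 3
}
</pre>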

Next, the sessionDidUpdate delegate needs to be implemented to notify the drag and drop system that the ImageView will perform a copy operation on any images dropped onto the view:

<pre>
func dropInteraction(_ interaction: UIDropInteraction,
    sessionDidUpdate session: UIDropSession) -> UIDropProposal {

    let location = session.location(in: self.view)
    let dropOperation: UIDropOperation

    if session.canLoadObjects(ofClass: UIImage.self) {
        if imageView.frame.contains(location) {
            dropOperation = .copy
            print("copy")
        } else {
            dropOperation = .cancel
        }
    } else {
        dropOperation = .forbidden
    }

    return UIDropProposal(operation: dropOperation)
}
</pre>

The code begins by identifying the location of the drop session on the screen before checking whether the session contains one or more image objects and, if so, whether that location falls within the area of the screen occupied by the imageView instance. If both criteria are met, the method returns a UIDropProposal object indicating that the view can handle the drop and will perform a copy operation on the image. If the location falls outside the image view the drop is cancelled, while a session that does not contain an image is reported as forbidden.

The final task in the process of adding drop support to the image view is to implement the performDrop delegate method. This method is called after the user drops the item onto the view and is responsible for handling the transfer from the drag source. It does this by calling the loadObjects method of the session object and processing the resulting items. In this case, since only a single image can be dropped into the view, the last image in the array is extracted, converted to sepia (if that option is selected) and displayed on the ImageView:

<pre>
func dropInteraction(_ interaction: UIDropInteraction,
    performDrop session: UIDropSession) {

    if session.canLoadObjects(ofClass: UIImage.self) {
        session.loadObjects(ofClass: UIImage.self) { (items) in
            if let images = items as? [UIImage] {

                if self.sepiaFilter {
                    let sepiaImage = self.convertToSepia(image: images.last!)
                    self.imageView.image = sepiaImage
                } else {
                    self.imageView.image = images.last!
                }
            }
        }
    }
}
</pre>

Before testing the app, add the convertToSepia method to the ViewController class so that it reads as follows:

<pre>
func convertToSepia(image: UIImage) -> UIImage {

    // Apply the Core Image sepia tone filter to the dropped image
    let sepiaFilter = CIFilter(name: "CISepiaTone")
    let cimage = CIImage(image: image)

    sepiaFilter?.setDefaults()
    sepiaFilter?.setValue(cimage, forKey: "inputImage")
    sepiaFilter?.setValue(NSNumber(value: 0.8 as Float),
                          forKey: "inputIntensity")

    let outputImage = sepiaFilter?.outputImage

    // Render the filtered CIImage to a CGImage and wrap it in a UIImage
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(outputImage!,
                                        from: outputImage!.extent)

    return UIImage(cgImage: cgImage!)
}
</pre>

== Testing the Drop Behavior ==

Run the app and display it in a split view configuration alongside the Photos app as previously outlined in Figure 74-2. Drag an image from the Photos app and hold it over the ImageView in the DragAndDrop app. Note that this time the green + icon appears alongside the preview image indicating that the view is able to handle the image drop:


[[File:]]

Figure 74-4


Drop the image onto the image view and wait while the image is transferred from the Photos app and converted to sepia. Once completed, the converted image will appear within the image view as shown in Figure 74-5:


[[File:]]

Figure 74-5


Test the spring loaded button by dragging another image from the Photos app and moving it over the Sepia button. Hold the preview in position without dropping until the button flashes multiple times and the title changes to “Sepia Off”. Continue dragging the preview to the image view and perform the drop. With the sepia filter turned off the image should appear unfiltered.
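
As an aside (and not required by this project), spring loading can also be added to views that do not expose an isSpringLoaded property by attaching a UISpringLoadedInteraction instance with a custom activation handler. A minimal sketch, placed for example within the initialize method, might look like this:

<pre>
// Hypothetical alternative to isSpringLoaded: attach a spring loaded interaction
let springLoaded = UISpringLoadedInteraction { (interaction, context) in
    // Called when a drag hovers over the view long enough to activate it
    print("Spring loaded activation")
}
sepiaButton.addInteraction(springLoaded)
</pre>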

Next, replace the Photos app with the Safari browser, select some text and attempt to drag and drop it into the ImageView. Based on the code in the sessionDidUpdate method, this time the forbidden icon should appear in the top right-hand corner of the preview image indicating that this view does not accept text:


[[File:]]

Figure 74-6


== Adding Drag Support to the Views ==

The next phase of this project is to add drag support to both the ImageView and TextView instances within the DragAndDrop app. The first step in this process is to add a UIDragInteraction object to the views. As with the drop delegate, the view controller class will also serve as the drag delegate. Within the ViewController.swift file, make the following modifications to the class declaration and the initialize method:

<pre>
class ViewController: UIViewController, UIDropInteractionDelegate,
                                        UIDragInteractionDelegate {
.
.
func initialize() {

    sepiaButton.isSpringLoaded = true

    textView.layer.borderWidth = 1
    textView.layer.borderColor = UIColor.lightGray.cgColor

    imageView.layer.borderWidth = 1
    imageView.layer.borderColor = UIColor.lightGray.cgColor

    imageView.isUserInteractionEnabled = true
    imageView.addInteraction(UIDropInteraction(delegate: self))

    imageView.addInteraction(UIDragInteraction(delegate: self))
    textView.addInteraction(UIDragInteraction(delegate: self))
}
.
.
}
</pre>

When a drag is initiated by the user, the system will make a call to the dragInteraction(itemsForBeginning) delegate method and will expect in return an array of UIDragItem objects containing the items to be transferred within the drag session to the drop target. Since the user interface contains two potential drag source views, code needs to be implemented within this delegate method to identify whether the drag is occurring in the ImageView or the TextView instance. When called, the dragInteraction(itemsForBeginning) method is passed a UIDragInteraction object which, in turn, contains a reference to the view on which the drag interaction was initiated. With this knowledge, the code for the method can be implemented as follows:

<pre>
func dragInteraction(_ interaction: UIDragInteraction,
    itemsForBeginning session: UIDragSession) -> [UIDragItem] {

    if let textView = interaction.view as? UITextView {
        let provider = NSItemProvider(object: textView.text! as NSString)
        let item = UIDragItem(itemProvider: provider)
        return [item]
    } else if let imageView = interaction.view as? UIImageView {
        guard let image = imageView.image else { return [] }
        let provider = NSItemProvider(object: image)
        let item = UIDragItem(itemProvider: provider)
        return [item]
    }
    return []
}
</pre>

The method attempts to cast the interaction view to a TextView and an ImageView to identify the view type. If the view is an ImageView, the image is extracted from the view and placed in an NSItemProvider object which, in turn, is used to construct a UIDragItem instance. This item is placed in an array and returned to the system. In the case of a TextView, similar steps are performed to return the text contained within the view.

== Testing the Drag Behavior ==

Run the app in split panel mode with the Photos app and drag and drop an image from the Photos app onto the ImageView in the DragAndDrop app. Touch the sepia image and wait for the view to lift up indicating that the drag has started. Drag the image preview to the Photos app and drop it. The sepia image will subsequently appear in the Photos app under the Today section.

Replace the Photos app in the split view pane with Safari and enter the URL for a web site into the TextView of the DragAndDrop app. Touch and hold over the TextView until the preview image appears, then drag and drop it into the Safari address bar at which point the URL referenced in the text should load into the browser.

== Customizing the Lift Preview Image ==

As currently configured, the preview image displayed during the drag is the default image which, in both cases, is a snapshot of the view itself. Sometimes this is adequate, but often it does not provide the best visual experience. Consider, for example, a photo displayed in the ImageView in portrait orientation, as is the case in Figure 74-7:


[[File:]]

Figure 74-7


In this situation, the preview image includes the white space on either side of the image as shown in Figure 74-8:


[[File:]]

Figure 74-8


A better experience for the user would be to create the preview using only the image being transferred. This can be achieved by implementing the previewForLifting delegate method. Within this method, we need to once again identify if this is a text or image view. In the case of the TextView, the method will create a UITargetedDragPreview object containing a reference to the TextView. This is essentially emulating the default behavior for the preview image. In the case of the ImageView, however, the image will be extracted from the view, placed within a new UIImageView instance and used to create a custom UITargetedDragPreview object:

<pre>
func dragInteraction(_ interaction: UIDragInteraction,
        previewForLifting item: UIDragItem,
        session: UIDragSession) -> UITargetedDragPreview? {

    let dragView = interaction.view!
    let dragPoint = session.location(in: dragView)
    let target = UIDragPreviewTarget(container: dragView,
                                     center: dragPoint)

    if (dragView as? UITextView) != nil {

        return UITargetedDragPreview(view: dragView)

    } else if let currentView = dragView as? UIImageView {

        let previewImageView = UIImageView(image: currentView.image)
        return UITargetedDragPreview(view: previewImageView,
                                     parameters: UIDragPreviewParameters(),
                                     target: target)
    }
    return nil
}
</pre>

This code requires some additional explanation. As previously stated, the objective is to return a UITargetedDragPreview object. At a minimum, this must contain a reference to the view that is to be displayed in the drag preview, which is precisely what happens for the TextView drag:

<pre>
return UITargetedDragPreview(view: dragView)
</pre>

In the case of the ImageView, the UITargetedDragPreview is created using an ImageView instance, a set of parameters in the form of a UIDragPreviewParameters object and a UIDragPreviewTarget instance:

<pre>
return UITargetedDragPreview(view: previewImageView,
                             parameters: UIDragPreviewParameters(),
                             target: target)
</pre>

In this example, no specific parameters are set on the UIDragPreviewParameters object though these parameters can be useful for configuring the background color of the preview, or for specifying a rectangle or Bézier path to define custom areas of the view to be displayed in the preview.
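
If such customization were required, the parameters instance could, for example, be configured before being passed to the preview. The following sketch (illustrative only, and not used in the tutorial project) assumes the previewImageView and target constants from the delegate method above:

<pre>
// Illustrative only: clear background and a rounded visible region for the preview
let parameters = UIDragPreviewParameters()
parameters.backgroundColor = UIColor.clear
parameters.visiblePath = UIBezierPath(roundedRect: previewImageView.bounds,
                                      cornerRadius: 10)

return UITargetedDragPreview(view: previewImageView,
                             parameters: parameters,
                             target: target)
</pre>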

The UIDragPreviewTarget object is constructed using a reference to the view on which the drag was initiated together with the touch point within the containing view. In the above case, the location at which the drag was initiated is used as the center value (in other words the location of the user’s finger on the screen) and matches the default drag and drop behavior. Optionally, the UIDragPreviewTarget object may also be initialized with a CGAffineTransform argument which will be used to animate the preview target.
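
For example, a target that also shrinks the preview slightly during the lift animation might be created as follows (again purely illustrative, reusing the dragView and dragPoint constants from the delegate method above):

<pre>
// Illustrative only: scale the preview to 80% of its size via the target transform
let transform = CGAffineTransform(scaleX: 0.8, y: 0.8)
let target = UIDragPreviewTarget(container: dragView,
                                 center: dragPoint,
                                 transform: transform)
</pre>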

== Testing the Custom Preview Image ==

Run the app once again, drag an image from the Photos app onto the ImageView and then initiate a drag operation on that image. Note that the preview image now appears without the white borders. Unfortunately, a new problem has been introduced: the preview image initially appears full size and extends beyond the edges of the screen. Once the drag moves, the system scales the image down, but the initial size also needs to be reduced considerably. To address this, the image can be scaled within the previewForLifting delegate method:

<pre>
func dragInteraction(_ interaction: UIDragInteraction,
        previewForLifting item: UIDragItem,
        session: UIDragSession) -> UITargetedDragPreview? {

    let dragView = interaction.view!
    let dragPoint = session.location(in: dragView)
    let target = UIDragPreviewTarget(container: dragView,
                                     center: dragPoint)

    if (dragView as? UITextView) != nil {

        return UITargetedDragPreview(view: dragView)

    } else if let currentView = dragView as? UIImageView {

        let previewImageView = UIImageView(image:
            scaleImage(image: currentView.image!, width: 100))
        return UITargetedDragPreview(view: previewImageView,
                                     parameters: UIDragPreviewParameters(),
                                     target: target)
    }
    return nil
}
.
.
func scaleImage(image: UIImage, width: CGFloat) -> UIImage {

    let oldWidth = image.size.width
    let scaleFactor = width / oldWidth

    let newHeight = image.size.height * scaleFactor
    let newWidth = oldWidth * scaleFactor

    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return newImage!
}
</pre>

With these changes implemented, the preview will appear in a smaller size when the drag is first initiated.

== Implementing Animation ==

The final task in this chapter is to implement some basic animation during the preview lifting phase. The goal will be to fade out the selected view for the duration of the drag session. This will involve changing the alpha value of the source view, and then restoring it when the drag session ends or is cancelled. This will require the addition of the following three delegate methods to the ViewController.swift file:

<pre>
func dragInteraction(_ interaction: UIDragInteraction,
        willAnimateLiftWith animator: UIDragAnimating,
        session: UIDragSession) {
    animator.addAnimations {
        interaction.view?.alpha = 0.5
    }
}

func dragInteraction(_ interaction: UIDragInteraction, item: UIDragItem,
        willAnimateCancelWith animator: UIDragAnimating) {
    animator.addAnimations {
        interaction.view?.alpha = 1.0
    }
}

func dragInteraction(_ interaction: UIDragInteraction,
        session: UIDragSession, didEndWith operation: UIDropOperation) {
    interaction.view?.alpha = 1.0
}
</pre>

Run the app one last time and verify that the source image within the DragAndDrop app fades during the drag operation and then returns to full brightness after the session ends.

== Summary ==

This chapter has worked through the implementation of drag and drop between apps. This included the implementation of methods for both the drag and drop delegates and adding support for spring loaded controls. The app also demonstrated the steps involved in loading transferred images and text, the customization of preview images during a drag operation and some basic animation techniques.